00:00:00.001 Started by upstream project "autotest-per-patch" build number 122946 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.013 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.014 The recommended git tool is: git 00:00:00.014 using credential 00000000-0000-0000-0000-000000000002 00:00:00.016 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.028 Fetching changes from the remote Git repository 00:00:00.030 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.044 Using shallow fetch with depth 1 00:00:00.044 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.044 > git --version # timeout=10 00:00:00.058 > git --version # 'git version 2.39.2' 00:00:00.059 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.059 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.059 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.993 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.004 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.017 Checking out Revision c7986954d8037b9c61764d44ed2af24625b251c6 (FETCH_HEAD) 00:00:04.017 > git config core.sparsecheckout # timeout=10 00:00:04.027 > git read-tree -mu HEAD # timeout=10 00:00:04.045 > git checkout -f c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=5 00:00:04.063 Commit message: "inventory/dev: add missing long names" 00:00:04.063 > git rev-list --no-walk c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=10 00:00:04.142 [Pipeline] Start of Pipeline 00:00:04.157 [Pipeline] library 00:00:04.159 Loading library shm_lib@master 00:00:04.785 Library shm_lib@master is cached. Copying from home. 00:00:04.814 [Pipeline] node 00:00:04.853 Running on CYP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.855 [Pipeline] { 00:00:04.868 [Pipeline] catchError 00:00:04.870 [Pipeline] { 00:00:04.885 [Pipeline] wrap 00:00:04.897 [Pipeline] { 00:00:04.906 [Pipeline] stage 00:00:04.909 [Pipeline] { (Prologue) 00:00:05.108 [Pipeline] sh 00:00:05.395 + logger -p user.info -t JENKINS-CI 00:00:05.418 [Pipeline] echo 00:00:05.419 Node: CYP12 00:00:05.427 [Pipeline] sh 00:00:05.728 [Pipeline] setCustomBuildProperty 00:00:05.738 [Pipeline] echo 00:00:05.739 Cleanup processes 00:00:05.743 [Pipeline] sh 00:00:06.027 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.027 3235857 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.039 [Pipeline] sh 00:00:06.322 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.322 ++ grep -v 'sudo pgrep' 00:00:06.322 ++ awk '{print $1}' 00:00:06.322 + sudo kill -9 00:00:06.322 + true 00:00:06.338 [Pipeline] cleanWs 00:00:06.347 [WS-CLEANUP] Deleting project workspace... 00:00:06.347 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.354 [WS-CLEANUP] done 00:00:06.358 [Pipeline] setCustomBuildProperty 00:00:06.370 [Pipeline] sh 00:00:06.652 + sudo git config --global --replace-all safe.directory '*' 00:00:06.743 [Pipeline] nodesByLabel 00:00:06.744 Found a total of 1 nodes with the 'sorcerer' label 00:00:06.755 [Pipeline] httpRequest 00:00:06.759 HttpMethod: GET 00:00:06.760 URL: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:00:06.763 Sending request to url: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:00:06.767 Response Code: HTTP/1.1 200 OK 00:00:06.767 Success: Status code 200 is in the accepted range: 200,404 00:00:06.768 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:00:08.331 [Pipeline] sh 00:00:08.617 + tar --no-same-owner -xf jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:00:08.638 [Pipeline] httpRequest 00:00:08.643 HttpMethod: GET 00:00:08.644 URL: http://10.211.164.101/packages/spdk_7f5235167296e0a0acdce5d1283d3624a55fce0c.tar.gz 00:00:08.645 Sending request to url: http://10.211.164.101/packages/spdk_7f5235167296e0a0acdce5d1283d3624a55fce0c.tar.gz 00:00:08.660 Response Code: HTTP/1.1 200 OK 00:00:08.660 Success: Status code 200 is in the accepted range: 200,404 00:00:08.660 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_7f5235167296e0a0acdce5d1283d3624a55fce0c.tar.gz 00:01:04.292 [Pipeline] sh 00:01:04.573 + tar --no-same-owner -xf spdk_7f5235167296e0a0acdce5d1283d3624a55fce0c.tar.gz 00:01:07.886 [Pipeline] sh 00:01:08.170 + git -C spdk log --oneline -n5 00:01:08.170 7f5235167 raid: utility function to get a base bdev in io context 00:01:08.170 41b20d3a5 raid: write sb earlier when removing a base bdev 00:01:08.170 f7b4bc85c raid: move the raid I/O wrappers back to the header 00:01:08.170 c7a82f3a8 ut/raid: move out raid0-specific tests to separate file 00:01:08.170 d1c04ac68 ut/raid: make the common ut functions public 00:01:08.182 [Pipeline] } 00:01:08.200 [Pipeline] // stage 00:01:08.207 [Pipeline] stage 00:01:08.209 [Pipeline] { (Prepare) 00:01:08.226 [Pipeline] writeFile 00:01:08.243 [Pipeline] sh 00:01:08.528 + logger -p user.info -t JENKINS-CI 00:01:08.541 [Pipeline] sh 00:01:08.825 + logger -p user.info -t JENKINS-CI 00:01:08.852 [Pipeline] sh 00:01:09.137 + cat autorun-spdk.conf 00:01:09.137 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:09.137 SPDK_TEST_NVMF=1 00:01:09.137 SPDK_TEST_NVME_CLI=1 00:01:09.137 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:09.137 SPDK_TEST_NVMF_NICS=e810 00:01:09.137 SPDK_TEST_VFIOUSER=1 00:01:09.137 SPDK_RUN_UBSAN=1 00:01:09.137 NET_TYPE=phy 00:01:09.145 RUN_NIGHTLY=0 00:01:09.150 [Pipeline] readFile 00:01:09.175 [Pipeline] withEnv 00:01:09.176 [Pipeline] { 00:01:09.190 [Pipeline] sh 00:01:09.478 + set -ex 00:01:09.478 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:09.478 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:09.478 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:09.478 ++ SPDK_TEST_NVMF=1 00:01:09.478 ++ SPDK_TEST_NVME_CLI=1 00:01:09.478 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:09.478 ++ SPDK_TEST_NVMF_NICS=e810 00:01:09.478 ++ SPDK_TEST_VFIOUSER=1 00:01:09.478 ++ SPDK_RUN_UBSAN=1 00:01:09.478 ++ NET_TYPE=phy 00:01:09.478 ++ RUN_NIGHTLY=0 00:01:09.478 + case $SPDK_TEST_NVMF_NICS in 00:01:09.478 + DRIVERS=ice 00:01:09.478 + [[ tcp == \r\d\m\a ]] 00:01:09.478 + [[ -n ice ]] 00:01:09.478 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 
00:01:09.478 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:09.478 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:09.478 rmmod: ERROR: Module irdma is not currently loaded 00:01:09.478 rmmod: ERROR: Module i40iw is not currently loaded 00:01:09.478 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:09.478 + true 00:01:09.478 + for D in $DRIVERS 00:01:09.478 + sudo modprobe ice 00:01:09.478 + exit 0 00:01:09.489 [Pipeline] } 00:01:09.507 [Pipeline] // withEnv 00:01:09.512 [Pipeline] } 00:01:09.529 [Pipeline] // stage 00:01:09.538 [Pipeline] catchError 00:01:09.540 [Pipeline] { 00:01:09.554 [Pipeline] timeout 00:01:09.555 Timeout set to expire in 40 min 00:01:09.556 [Pipeline] { 00:01:09.571 [Pipeline] stage 00:01:09.572 [Pipeline] { (Tests) 00:01:09.588 [Pipeline] sh 00:01:09.912 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:09.912 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:09.912 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:09.912 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:09.912 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:09.912 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:09.912 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:09.912 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:09.912 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:09.912 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:09.912 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:09.912 + source /etc/os-release 00:01:09.912 ++ NAME='Fedora Linux' 00:01:09.912 ++ VERSION='38 (Cloud Edition)' 00:01:09.912 ++ ID=fedora 00:01:09.912 ++ VERSION_ID=38 00:01:09.912 ++ VERSION_CODENAME= 00:01:09.912 ++ PLATFORM_ID=platform:f38 00:01:09.912 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:09.912 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:09.912 ++ LOGO=fedora-logo-icon 00:01:09.912 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:09.912 ++ HOME_URL=https://fedoraproject.org/ 00:01:09.912 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:09.912 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:09.912 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:09.912 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:09.912 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:09.912 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:09.912 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:09.912 ++ SUPPORT_END=2024-05-14 00:01:09.912 ++ VARIANT='Cloud Edition' 00:01:09.912 ++ VARIANT_ID=cloud 00:01:09.912 + uname -a 00:01:09.912 Linux spdk-cyp-12 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:09.912 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:13.215 Hugepages 00:01:13.215 node hugesize free / total 00:01:13.215 node0 1048576kB 0 / 0 00:01:13.215 node0 2048kB 0 / 0 00:01:13.215 node1 1048576kB 0 / 0 00:01:13.215 node1 2048kB 0 / 0 00:01:13.215 00:01:13.215 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:13.215 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:01:13.215 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:01:13.215 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:01:13.215 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:01:13.215 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:01:13.215 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 
00:01:13.215 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:01:13.215 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:01:13.215 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:01:13.215 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:01:13.215 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:01:13.215 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:01:13.476 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:01:13.476 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:01:13.476 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:01:13.476 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:01:13.476 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:01:13.476 + rm -f /tmp/spdk-ld-path 00:01:13.476 + source autorun-spdk.conf 00:01:13.476 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:13.476 ++ SPDK_TEST_NVMF=1 00:01:13.476 ++ SPDK_TEST_NVME_CLI=1 00:01:13.476 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:13.476 ++ SPDK_TEST_NVMF_NICS=e810 00:01:13.476 ++ SPDK_TEST_VFIOUSER=1 00:01:13.476 ++ SPDK_RUN_UBSAN=1 00:01:13.476 ++ NET_TYPE=phy 00:01:13.476 ++ RUN_NIGHTLY=0 00:01:13.476 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:13.476 + [[ -n '' ]] 00:01:13.476 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:13.476 + for M in /var/spdk/build-*-manifest.txt 00:01:13.476 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:13.476 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:13.476 + for M in /var/spdk/build-*-manifest.txt 00:01:13.476 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:13.476 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:13.476 ++ uname 00:01:13.476 + [[ Linux == \L\i\n\u\x ]] 00:01:13.476 + sudo dmesg -T 00:01:13.476 + sudo dmesg --clear 00:01:13.476 + dmesg_pid=3236947 00:01:13.476 + [[ Fedora Linux == FreeBSD ]] 00:01:13.476 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:13.476 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:13.476 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:13.476 + [[ -x /usr/src/fio-static/fio ]] 00:01:13.476 + export FIO_BIN=/usr/src/fio-static/fio 00:01:13.476 + FIO_BIN=/usr/src/fio-static/fio 00:01:13.476 + sudo dmesg -Tw 00:01:13.476 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:13.476 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:13.476 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:13.476 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:13.476 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:13.476 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:13.476 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:13.476 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:13.476 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:13.476 Test configuration: 00:01:13.476 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:13.476 SPDK_TEST_NVMF=1 00:01:13.476 SPDK_TEST_NVME_CLI=1 00:01:13.476 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:13.476 SPDK_TEST_NVMF_NICS=e810 00:01:13.476 SPDK_TEST_VFIOUSER=1 00:01:13.476 SPDK_RUN_UBSAN=1 00:01:13.476 NET_TYPE=phy 00:01:13.737 RUN_NIGHTLY=0 19:16:39 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:13.737 19:16:39 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:13.737 19:16:39 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:13.737 19:16:39 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:13.737 19:16:39 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:13.737 19:16:39 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:13.737 19:16:39 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:13.737 19:16:39 -- paths/export.sh@5 -- $ export PATH 00:01:13.737 19:16:39 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:13.737 19:16:39 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:13.737 19:16:39 -- common/autobuild_common.sh@437 -- $ date +%s 00:01:13.737 19:16:39 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715793399.XXXXXX 00:01:13.737 19:16:39 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715793399.ZTKhgl 00:01:13.737 19:16:39 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:01:13.737 19:16:39 -- 
common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:01:13.737 19:16:39 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:13.737 19:16:39 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:13.737 19:16:39 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:13.737 19:16:39 -- common/autobuild_common.sh@453 -- $ get_config_params 00:01:13.737 19:16:39 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:01:13.737 19:16:39 -- common/autotest_common.sh@10 -- $ set +x 00:01:13.737 19:16:39 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:13.737 19:16:39 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:01:13.737 19:16:39 -- pm/common@17 -- $ local monitor 00:01:13.737 19:16:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:13.737 19:16:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:13.737 19:16:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:13.737 19:16:39 -- pm/common@21 -- $ date +%s 00:01:13.737 19:16:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:13.737 19:16:39 -- pm/common@21 -- $ date +%s 00:01:13.737 19:16:39 -- pm/common@25 -- $ sleep 1 00:01:13.737 19:16:39 -- pm/common@21 -- $ date +%s 00:01:13.737 19:16:39 -- pm/common@21 -- $ date +%s 00:01:13.737 19:16:39 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715793399 00:01:13.737 19:16:39 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715793399 00:01:13.737 19:16:39 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715793399 00:01:13.737 19:16:39 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715793399 00:01:13.737 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715793399_collect-vmstat.pm.log 00:01:13.737 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715793399_collect-cpu-load.pm.log 00:01:13.737 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715793399_collect-cpu-temp.pm.log 00:01:13.737 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715793399_collect-bmc-pm.bmc.pm.log 00:01:14.744 19:16:40 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:01:14.744 19:16:40 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:14.744 19:16:40 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:14.744 19:16:40 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:14.744 19:16:40 -- spdk/autobuild.sh@16 -- $ date -u 00:01:14.744 Wed May 15 05:16:40 PM UTC 2024 00:01:14.744 19:16:40 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:14.744 v24.05-pre-668-g7f5235167 00:01:14.744 19:16:40 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:14.744 19:16:40 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:14.744 19:16:40 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:14.744 19:16:40 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:01:14.744 19:16:40 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:14.744 19:16:40 -- common/autotest_common.sh@10 -- $ set +x 00:01:14.744 ************************************ 00:01:14.744 START TEST ubsan 00:01:14.744 ************************************ 00:01:14.744 19:16:40 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:01:14.744 using ubsan 00:01:14.744 00:01:14.744 real 0m0.000s 00:01:14.744 user 0m0.000s 00:01:14.744 sys 0m0.000s 00:01:14.744 19:16:40 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:01:14.744 19:16:40 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:14.744 ************************************ 00:01:14.744 END TEST ubsan 00:01:14.744 ************************************ 00:01:14.744 19:16:40 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:14.744 19:16:40 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:14.744 19:16:40 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:14.744 19:16:40 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:14.744 19:16:40 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:14.744 19:16:40 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:14.744 19:16:40 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:14.744 19:16:40 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:14.744 19:16:40 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:15.004 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:15.004 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:15.264 Using 'verbs' RDMA provider 00:01:31.128 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:43.364 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:43.364 Creating mk/config.mk...done. 00:01:43.624 Creating mk/cc.flags.mk...done. 00:01:43.624 Type 'make' to build. 00:01:43.624 19:17:09 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:01:43.624 19:17:09 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:01:43.624 19:17:09 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:43.624 19:17:09 -- common/autotest_common.sh@10 -- $ set +x 00:01:43.624 ************************************ 00:01:43.624 START TEST make 00:01:43.624 ************************************ 00:01:43.624 19:17:09 make -- common/autotest_common.sh@1121 -- $ make -j144 00:01:43.885 make[1]: Nothing to be done for 'all'. 
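(For reference, not part of the console output: the configure-and-build step recorded in the timestamps above reduces to the short shell sketch below. The configure flags are copied verbatim from this run's log; the SPDK checkout path and the -j144 job count are specific to this CI host and are assumptions on any other machine.)

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Same options autobuild.sh passed to configure in this run: a debug build with
# UBSAN and coverage enabled, shared libraries, vfio-user and ublk support, and
# unit tests disabled.
./configure --enable-debug --enable-werror --with-rdma --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
# autobuild.sh then builds with one job per hardware thread on this host.
make -j144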
00:01:45.273 The Meson build system 00:01:45.273 Version: 1.3.1 00:01:45.273 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:45.273 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:45.273 Build type: native build 00:01:45.273 Project name: libvfio-user 00:01:45.273 Project version: 0.0.1 00:01:45.273 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:45.273 C linker for the host machine: cc ld.bfd 2.39-16 00:01:45.273 Host machine cpu family: x86_64 00:01:45.273 Host machine cpu: x86_64 00:01:45.273 Run-time dependency threads found: YES 00:01:45.273 Library dl found: YES 00:01:45.273 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:45.273 Run-time dependency json-c found: YES 0.17 00:01:45.273 Run-time dependency cmocka found: YES 1.1.7 00:01:45.273 Program pytest-3 found: NO 00:01:45.273 Program flake8 found: NO 00:01:45.273 Program misspell-fixer found: NO 00:01:45.273 Program restructuredtext-lint found: NO 00:01:45.273 Program valgrind found: YES (/usr/bin/valgrind) 00:01:45.273 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:45.273 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:45.273 Compiler for C supports arguments -Wwrite-strings: YES 00:01:45.273 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:45.273 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:45.273 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:45.273 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:45.273 Build targets in project: 8 00:01:45.273 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:45.273 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:45.273 00:01:45.273 libvfio-user 0.0.1 00:01:45.273 00:01:45.273 User defined options 00:01:45.273 buildtype : debug 00:01:45.273 default_library: shared 00:01:45.273 libdir : /usr/local/lib 00:01:45.273 00:01:45.273 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:45.531 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:45.789 [1/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:45.789 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:45.789 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:45.789 [4/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:45.789 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:45.789 [6/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:45.789 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:45.789 [8/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:45.789 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:45.789 [10/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:45.789 [11/37] Compiling C object samples/null.p/null.c.o 00:01:45.789 [12/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:45.789 [13/37] Compiling C object samples/server.p/server.c.o 00:01:45.789 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:45.789 [15/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:45.789 [16/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:45.789 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:45.789 [18/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:45.789 [19/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:45.789 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:45.789 [21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:45.789 [22/37] Compiling C object samples/client.p/client.c.o 00:01:45.789 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:45.789 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:45.789 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:45.789 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:45.789 [27/37] Linking target samples/client 00:01:45.789 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:45.789 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:01:45.789 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:45.789 [31/37] Linking target test/unit_tests 00:01:46.048 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:46.048 [33/37] Linking target samples/shadow_ioeventfd_server 00:01:46.048 [34/37] Linking target samples/null 00:01:46.048 [35/37] Linking target samples/gpio-pci-idio-16 00:01:46.048 [36/37] Linking target samples/lspci 00:01:46.048 [37/37] Linking target samples/server 00:01:46.048 INFO: autodetecting backend as ninja 00:01:46.048 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:46.048 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:46.618 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:46.618 ninja: no work to do. 00:01:51.912 The Meson build system 00:01:51.912 Version: 1.3.1 00:01:51.912 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:51.912 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:51.912 Build type: native build 00:01:51.912 Program cat found: YES (/usr/bin/cat) 00:01:51.912 Project name: DPDK 00:01:51.912 Project version: 23.11.0 00:01:51.912 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:51.912 C linker for the host machine: cc ld.bfd 2.39-16 00:01:51.912 Host machine cpu family: x86_64 00:01:51.912 Host machine cpu: x86_64 00:01:51.912 Message: ## Building in Developer Mode ## 00:01:51.912 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:51.912 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:51.913 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:51.913 Program python3 found: YES (/usr/bin/python3) 00:01:51.913 Program cat found: YES (/usr/bin/cat) 00:01:51.913 Compiler for C supports arguments -march=native: YES 00:01:51.913 Checking for size of "void *" : 8 00:01:51.913 Checking for size of "void *" : 8 (cached) 00:01:51.913 Library m found: YES 00:01:51.913 Library numa found: YES 00:01:51.913 Has header "numaif.h" : YES 00:01:51.913 Library fdt found: NO 00:01:51.913 Library execinfo found: NO 00:01:51.913 Has header "execinfo.h" : YES 00:01:51.913 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:51.913 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:51.913 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:51.913 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:51.913 Run-time dependency openssl found: YES 3.0.9 00:01:51.913 Run-time dependency libpcap found: YES 1.10.4 00:01:51.913 Has header "pcap.h" with dependency libpcap: YES 00:01:51.913 Compiler for C supports arguments -Wcast-qual: YES 00:01:51.913 Compiler for C supports arguments -Wdeprecated: YES 00:01:51.913 Compiler for C supports arguments -Wformat: YES 00:01:51.913 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:51.913 Compiler for C supports arguments -Wformat-security: NO 00:01:51.913 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:51.913 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:51.913 Compiler for C supports arguments -Wnested-externs: YES 00:01:51.913 Compiler for C supports arguments -Wold-style-definition: YES 00:01:51.913 Compiler for C supports arguments -Wpointer-arith: YES 00:01:51.913 Compiler for C supports arguments -Wsign-compare: YES 00:01:51.913 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:51.913 Compiler for C supports arguments -Wundef: YES 00:01:51.913 Compiler for C supports arguments -Wwrite-strings: YES 00:01:51.913 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:51.913 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:51.913 Compiler for C supports arguments 
-Wno-missing-field-initializers: YES 00:01:51.913 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:51.913 Program objdump found: YES (/usr/bin/objdump) 00:01:51.913 Compiler for C supports arguments -mavx512f: YES 00:01:51.913 Checking if "AVX512 checking" compiles: YES 00:01:51.913 Fetching value of define "__SSE4_2__" : 1 00:01:51.913 Fetching value of define "__AES__" : 1 00:01:51.913 Fetching value of define "__AVX__" : 1 00:01:51.913 Fetching value of define "__AVX2__" : 1 00:01:51.913 Fetching value of define "__AVX512BW__" : 1 00:01:51.913 Fetching value of define "__AVX512CD__" : 1 00:01:51.913 Fetching value of define "__AVX512DQ__" : 1 00:01:51.913 Fetching value of define "__AVX512F__" : 1 00:01:51.913 Fetching value of define "__AVX512VL__" : 1 00:01:51.913 Fetching value of define "__PCLMUL__" : 1 00:01:51.913 Fetching value of define "__RDRND__" : 1 00:01:51.913 Fetching value of define "__RDSEED__" : 1 00:01:51.913 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:51.913 Fetching value of define "__znver1__" : (undefined) 00:01:51.913 Fetching value of define "__znver2__" : (undefined) 00:01:51.913 Fetching value of define "__znver3__" : (undefined) 00:01:51.913 Fetching value of define "__znver4__" : (undefined) 00:01:51.913 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:51.913 Message: lib/log: Defining dependency "log" 00:01:51.913 Message: lib/kvargs: Defining dependency "kvargs" 00:01:51.913 Message: lib/telemetry: Defining dependency "telemetry" 00:01:51.913 Checking for function "getentropy" : NO 00:01:51.913 Message: lib/eal: Defining dependency "eal" 00:01:51.913 Message: lib/ring: Defining dependency "ring" 00:01:51.913 Message: lib/rcu: Defining dependency "rcu" 00:01:51.913 Message: lib/mempool: Defining dependency "mempool" 00:01:51.913 Message: lib/mbuf: Defining dependency "mbuf" 00:01:51.913 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:51.913 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:51.913 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:51.913 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:51.913 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:51.913 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:51.913 Compiler for C supports arguments -mpclmul: YES 00:01:51.913 Compiler for C supports arguments -maes: YES 00:01:51.913 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:51.913 Compiler for C supports arguments -mavx512bw: YES 00:01:51.913 Compiler for C supports arguments -mavx512dq: YES 00:01:51.913 Compiler for C supports arguments -mavx512vl: YES 00:01:51.913 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:51.913 Compiler for C supports arguments -mavx2: YES 00:01:51.913 Compiler for C supports arguments -mavx: YES 00:01:51.913 Message: lib/net: Defining dependency "net" 00:01:51.913 Message: lib/meter: Defining dependency "meter" 00:01:51.913 Message: lib/ethdev: Defining dependency "ethdev" 00:01:51.913 Message: lib/pci: Defining dependency "pci" 00:01:51.913 Message: lib/cmdline: Defining dependency "cmdline" 00:01:51.913 Message: lib/hash: Defining dependency "hash" 00:01:51.913 Message: lib/timer: Defining dependency "timer" 00:01:51.913 Message: lib/compressdev: Defining dependency "compressdev" 00:01:51.913 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:51.913 Message: lib/dmadev: Defining dependency "dmadev" 00:01:51.913 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:51.913 
Message: lib/power: Defining dependency "power" 00:01:51.913 Message: lib/reorder: Defining dependency "reorder" 00:01:51.913 Message: lib/security: Defining dependency "security" 00:01:51.913 Has header "linux/userfaultfd.h" : YES 00:01:51.913 Has header "linux/vduse.h" : YES 00:01:51.913 Message: lib/vhost: Defining dependency "vhost" 00:01:51.913 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:51.913 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:51.913 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:51.913 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:51.913 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:51.913 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:51.913 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:51.913 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:51.913 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:51.913 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:51.913 Program doxygen found: YES (/usr/bin/doxygen) 00:01:51.913 Configuring doxy-api-html.conf using configuration 00:01:51.913 Configuring doxy-api-man.conf using configuration 00:01:51.913 Program mandb found: YES (/usr/bin/mandb) 00:01:51.913 Program sphinx-build found: NO 00:01:51.913 Configuring rte_build_config.h using configuration 00:01:51.913 Message: 00:01:51.913 ================= 00:01:51.913 Applications Enabled 00:01:51.913 ================= 00:01:51.913 00:01:51.913 apps: 00:01:51.913 00:01:51.913 00:01:51.913 Message: 00:01:51.913 ================= 00:01:51.913 Libraries Enabled 00:01:51.913 ================= 00:01:51.913 00:01:51.913 libs: 00:01:51.913 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:51.913 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:51.913 cryptodev, dmadev, power, reorder, security, vhost, 00:01:51.913 00:01:51.913 Message: 00:01:51.913 =============== 00:01:51.913 Drivers Enabled 00:01:51.913 =============== 00:01:51.913 00:01:51.913 common: 00:01:51.913 00:01:51.913 bus: 00:01:51.913 pci, vdev, 00:01:51.913 mempool: 00:01:51.913 ring, 00:01:51.913 dma: 00:01:51.913 00:01:51.913 net: 00:01:51.913 00:01:51.913 crypto: 00:01:51.913 00:01:51.913 compress: 00:01:51.913 00:01:51.913 vdpa: 00:01:51.913 00:01:51.913 00:01:51.913 Message: 00:01:51.913 ================= 00:01:51.913 Content Skipped 00:01:51.913 ================= 00:01:51.913 00:01:51.913 apps: 00:01:51.913 dumpcap: explicitly disabled via build config 00:01:51.913 graph: explicitly disabled via build config 00:01:51.913 pdump: explicitly disabled via build config 00:01:51.913 proc-info: explicitly disabled via build config 00:01:51.913 test-acl: explicitly disabled via build config 00:01:51.913 test-bbdev: explicitly disabled via build config 00:01:51.913 test-cmdline: explicitly disabled via build config 00:01:51.913 test-compress-perf: explicitly disabled via build config 00:01:51.913 test-crypto-perf: explicitly disabled via build config 00:01:51.913 test-dma-perf: explicitly disabled via build config 00:01:51.913 test-eventdev: explicitly disabled via build config 00:01:51.913 test-fib: explicitly disabled via build config 00:01:51.913 test-flow-perf: explicitly disabled via build config 00:01:51.913 test-gpudev: explicitly disabled via build config 00:01:51.913 test-mldev: explicitly disabled via build config 
00:01:51.913 test-pipeline: explicitly disabled via build config 00:01:51.913 test-pmd: explicitly disabled via build config 00:01:51.913 test-regex: explicitly disabled via build config 00:01:51.913 test-sad: explicitly disabled via build config 00:01:51.913 test-security-perf: explicitly disabled via build config 00:01:51.913 00:01:51.913 libs: 00:01:51.913 metrics: explicitly disabled via build config 00:01:51.913 acl: explicitly disabled via build config 00:01:51.913 bbdev: explicitly disabled via build config 00:01:51.913 bitratestats: explicitly disabled via build config 00:01:51.913 bpf: explicitly disabled via build config 00:01:51.913 cfgfile: explicitly disabled via build config 00:01:51.913 distributor: explicitly disabled via build config 00:01:51.913 efd: explicitly disabled via build config 00:01:51.913 eventdev: explicitly disabled via build config 00:01:51.913 dispatcher: explicitly disabled via build config 00:01:51.913 gpudev: explicitly disabled via build config 00:01:51.913 gro: explicitly disabled via build config 00:01:51.913 gso: explicitly disabled via build config 00:01:51.913 ip_frag: explicitly disabled via build config 00:01:51.913 jobstats: explicitly disabled via build config 00:01:51.913 latencystats: explicitly disabled via build config 00:01:51.913 lpm: explicitly disabled via build config 00:01:51.913 member: explicitly disabled via build config 00:01:51.913 pcapng: explicitly disabled via build config 00:01:51.913 rawdev: explicitly disabled via build config 00:01:51.913 regexdev: explicitly disabled via build config 00:01:51.913 mldev: explicitly disabled via build config 00:01:51.913 rib: explicitly disabled via build config 00:01:51.913 sched: explicitly disabled via build config 00:01:51.913 stack: explicitly disabled via build config 00:01:51.913 ipsec: explicitly disabled via build config 00:01:51.913 pdcp: explicitly disabled via build config 00:01:51.914 fib: explicitly disabled via build config 00:01:51.914 port: explicitly disabled via build config 00:01:51.914 pdump: explicitly disabled via build config 00:01:51.914 table: explicitly disabled via build config 00:01:51.914 pipeline: explicitly disabled via build config 00:01:51.914 graph: explicitly disabled via build config 00:01:51.914 node: explicitly disabled via build config 00:01:51.914 00:01:51.914 drivers: 00:01:51.914 common/cpt: not in enabled drivers build config 00:01:51.914 common/dpaax: not in enabled drivers build config 00:01:51.914 common/iavf: not in enabled drivers build config 00:01:51.914 common/idpf: not in enabled drivers build config 00:01:51.914 common/mvep: not in enabled drivers build config 00:01:51.914 common/octeontx: not in enabled drivers build config 00:01:51.914 bus/auxiliary: not in enabled drivers build config 00:01:51.914 bus/cdx: not in enabled drivers build config 00:01:51.914 bus/dpaa: not in enabled drivers build config 00:01:51.914 bus/fslmc: not in enabled drivers build config 00:01:51.914 bus/ifpga: not in enabled drivers build config 00:01:51.914 bus/platform: not in enabled drivers build config 00:01:51.914 bus/vmbus: not in enabled drivers build config 00:01:51.914 common/cnxk: not in enabled drivers build config 00:01:51.914 common/mlx5: not in enabled drivers build config 00:01:51.914 common/nfp: not in enabled drivers build config 00:01:51.914 common/qat: not in enabled drivers build config 00:01:51.914 common/sfc_efx: not in enabled drivers build config 00:01:51.914 mempool/bucket: not in enabled drivers build config 00:01:51.914 mempool/cnxk: 
not in enabled drivers build config 00:01:51.914 mempool/dpaa: not in enabled drivers build config 00:01:51.914 mempool/dpaa2: not in enabled drivers build config 00:01:51.914 mempool/octeontx: not in enabled drivers build config 00:01:51.914 mempool/stack: not in enabled drivers build config 00:01:51.914 dma/cnxk: not in enabled drivers build config 00:01:51.914 dma/dpaa: not in enabled drivers build config 00:01:51.914 dma/dpaa2: not in enabled drivers build config 00:01:51.914 dma/hisilicon: not in enabled drivers build config 00:01:51.914 dma/idxd: not in enabled drivers build config 00:01:51.914 dma/ioat: not in enabled drivers build config 00:01:51.914 dma/skeleton: not in enabled drivers build config 00:01:51.914 net/af_packet: not in enabled drivers build config 00:01:51.914 net/af_xdp: not in enabled drivers build config 00:01:51.914 net/ark: not in enabled drivers build config 00:01:51.914 net/atlantic: not in enabled drivers build config 00:01:51.914 net/avp: not in enabled drivers build config 00:01:51.914 net/axgbe: not in enabled drivers build config 00:01:51.914 net/bnx2x: not in enabled drivers build config 00:01:51.914 net/bnxt: not in enabled drivers build config 00:01:51.914 net/bonding: not in enabled drivers build config 00:01:51.914 net/cnxk: not in enabled drivers build config 00:01:51.914 net/cpfl: not in enabled drivers build config 00:01:51.914 net/cxgbe: not in enabled drivers build config 00:01:51.914 net/dpaa: not in enabled drivers build config 00:01:51.914 net/dpaa2: not in enabled drivers build config 00:01:51.914 net/e1000: not in enabled drivers build config 00:01:51.914 net/ena: not in enabled drivers build config 00:01:51.914 net/enetc: not in enabled drivers build config 00:01:51.914 net/enetfec: not in enabled drivers build config 00:01:51.914 net/enic: not in enabled drivers build config 00:01:51.914 net/failsafe: not in enabled drivers build config 00:01:51.914 net/fm10k: not in enabled drivers build config 00:01:51.914 net/gve: not in enabled drivers build config 00:01:51.914 net/hinic: not in enabled drivers build config 00:01:51.914 net/hns3: not in enabled drivers build config 00:01:51.914 net/i40e: not in enabled drivers build config 00:01:51.914 net/iavf: not in enabled drivers build config 00:01:51.914 net/ice: not in enabled drivers build config 00:01:51.914 net/idpf: not in enabled drivers build config 00:01:51.914 net/igc: not in enabled drivers build config 00:01:51.914 net/ionic: not in enabled drivers build config 00:01:51.914 net/ipn3ke: not in enabled drivers build config 00:01:51.914 net/ixgbe: not in enabled drivers build config 00:01:51.914 net/mana: not in enabled drivers build config 00:01:51.914 net/memif: not in enabled drivers build config 00:01:51.914 net/mlx4: not in enabled drivers build config 00:01:51.914 net/mlx5: not in enabled drivers build config 00:01:51.914 net/mvneta: not in enabled drivers build config 00:01:51.914 net/mvpp2: not in enabled drivers build config 00:01:51.914 net/netvsc: not in enabled drivers build config 00:01:51.914 net/nfb: not in enabled drivers build config 00:01:51.914 net/nfp: not in enabled drivers build config 00:01:51.914 net/ngbe: not in enabled drivers build config 00:01:51.914 net/null: not in enabled drivers build config 00:01:51.914 net/octeontx: not in enabled drivers build config 00:01:51.914 net/octeon_ep: not in enabled drivers build config 00:01:51.914 net/pcap: not in enabled drivers build config 00:01:51.914 net/pfe: not in enabled drivers build config 00:01:51.914 net/qede: 
not in enabled drivers build config 00:01:51.914 net/ring: not in enabled drivers build config 00:01:51.914 net/sfc: not in enabled drivers build config 00:01:51.914 net/softnic: not in enabled drivers build config 00:01:51.914 net/tap: not in enabled drivers build config 00:01:51.914 net/thunderx: not in enabled drivers build config 00:01:51.914 net/txgbe: not in enabled drivers build config 00:01:51.914 net/vdev_netvsc: not in enabled drivers build config 00:01:51.914 net/vhost: not in enabled drivers build config 00:01:51.914 net/virtio: not in enabled drivers build config 00:01:51.914 net/vmxnet3: not in enabled drivers build config 00:01:51.914 raw/*: missing internal dependency, "rawdev" 00:01:51.914 crypto/armv8: not in enabled drivers build config 00:01:51.914 crypto/bcmfs: not in enabled drivers build config 00:01:51.914 crypto/caam_jr: not in enabled drivers build config 00:01:51.914 crypto/ccp: not in enabled drivers build config 00:01:51.914 crypto/cnxk: not in enabled drivers build config 00:01:51.914 crypto/dpaa_sec: not in enabled drivers build config 00:01:51.914 crypto/dpaa2_sec: not in enabled drivers build config 00:01:51.914 crypto/ipsec_mb: not in enabled drivers build config 00:01:51.914 crypto/mlx5: not in enabled drivers build config 00:01:51.914 crypto/mvsam: not in enabled drivers build config 00:01:51.914 crypto/nitrox: not in enabled drivers build config 00:01:51.914 crypto/null: not in enabled drivers build config 00:01:51.914 crypto/octeontx: not in enabled drivers build config 00:01:51.914 crypto/openssl: not in enabled drivers build config 00:01:51.914 crypto/scheduler: not in enabled drivers build config 00:01:51.914 crypto/uadk: not in enabled drivers build config 00:01:51.914 crypto/virtio: not in enabled drivers build config 00:01:51.914 compress/isal: not in enabled drivers build config 00:01:51.914 compress/mlx5: not in enabled drivers build config 00:01:51.914 compress/octeontx: not in enabled drivers build config 00:01:51.914 compress/zlib: not in enabled drivers build config 00:01:51.914 regex/*: missing internal dependency, "regexdev" 00:01:51.914 ml/*: missing internal dependency, "mldev" 00:01:51.914 vdpa/ifc: not in enabled drivers build config 00:01:51.914 vdpa/mlx5: not in enabled drivers build config 00:01:51.914 vdpa/nfp: not in enabled drivers build config 00:01:51.914 vdpa/sfc: not in enabled drivers build config 00:01:51.914 event/*: missing internal dependency, "eventdev" 00:01:51.914 baseband/*: missing internal dependency, "bbdev" 00:01:51.914 gpu/*: missing internal dependency, "gpudev" 00:01:51.914 00:01:51.914 00:01:51.914 Build targets in project: 84 00:01:51.914 00:01:51.914 DPDK 23.11.0 00:01:51.914 00:01:51.914 User defined options 00:01:51.914 buildtype : debug 00:01:51.914 default_library : shared 00:01:51.914 libdir : lib 00:01:51.914 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:51.914 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:51.914 c_link_args : 00:01:51.914 cpu_instruction_set: native 00:01:51.914 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:01:51.914 disable_libs : 
sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:01:51.914 enable_docs : false 00:01:51.914 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:51.914 enable_kmods : false 00:01:51.914 tests : false 00:01:51.914 00:01:51.914 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:52.183 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:52.183 [1/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:52.442 [2/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:52.442 [3/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:52.443 [4/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:52.443 [5/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:52.443 [6/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:52.443 [7/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:52.443 [8/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:52.443 [9/264] Linking static target lib/librte_kvargs.a 00:01:52.443 [10/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:52.443 [11/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:52.443 [12/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:52.443 [13/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:52.443 [14/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:52.443 [15/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:52.443 [16/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:52.443 [17/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:52.443 [18/264] Linking static target lib/librte_log.a 00:01:52.443 [19/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:52.443 [20/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:52.443 [21/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:52.443 [22/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:52.443 [23/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:52.443 [24/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:52.443 [25/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:52.443 [26/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:52.443 [27/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:52.443 [28/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:52.443 [29/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:52.443 [30/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:52.443 [31/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:52.443 [32/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:52.443 [33/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:52.443 [34/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 
00:01:52.443 [35/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:52.702 [36/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:52.702 [37/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:52.702 [38/264] Linking static target lib/librte_pci.a 00:01:52.702 [39/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:52.702 [40/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:52.702 [41/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:52.702 [42/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:52.702 [43/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:52.702 [44/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:52.702 [45/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:52.702 [46/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.702 [47/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:52.702 [48/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:52.702 [49/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.702 [50/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:52.961 [51/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:52.961 [52/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:52.961 [53/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:52.961 [54/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:52.961 [55/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:52.961 [56/264] Linking static target lib/librte_meter.a 00:01:52.961 [57/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:52.961 [58/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:52.961 [59/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:52.961 [60/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:52.961 [61/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:52.961 [62/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:52.961 [63/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:52.961 [64/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:52.961 [65/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:52.961 [66/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:52.961 [67/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:52.961 [68/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:52.961 [69/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:52.961 [70/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:52.961 [71/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:52.961 [72/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:52.961 [73/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:52.961 [74/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:52.961 [75/264] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:52.961 [76/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:52.961 [77/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:52.961 [78/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:52.961 [79/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:52.961 [80/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:52.961 [81/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:52.961 [82/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:52.961 [83/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:52.961 [84/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:52.961 [85/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:52.961 [86/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:52.961 [87/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:52.961 [88/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:52.961 [89/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:52.961 [90/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:52.961 [91/264] Linking static target lib/librte_telemetry.a 00:01:52.961 [92/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:52.961 [93/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:52.961 [94/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:52.961 [95/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:52.961 [96/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:52.961 [97/264] Linking static target lib/librte_ring.a 00:01:52.961 [98/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:52.961 [99/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:52.961 [100/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:52.961 [101/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:52.961 [102/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:52.961 [103/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:52.961 [104/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:52.961 [105/264] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:52.961 [106/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:52.961 [107/264] Linking static target lib/librte_timer.a 00:01:52.961 [108/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:52.961 [109/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:52.961 [110/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:52.961 [111/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:52.961 [112/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:52.961 [113/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:52.961 [114/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:52.961 [115/264] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:52.961 [116/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:52.961 [117/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:52.961 [118/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:52.961 [119/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:52.961 [120/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:52.961 [121/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:52.961 [122/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:52.961 [123/264] Linking static target lib/librte_mempool.a 00:01:52.961 [124/264] Linking static target lib/librte_rcu.a 00:01:52.961 [125/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:52.961 [126/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:52.961 [127/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:52.961 [128/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:52.961 [129/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:52.961 [130/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:52.961 [131/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:52.961 [132/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:52.961 [133/264] Linking static target lib/librte_power.a 00:01:52.961 [134/264] Linking static target lib/librte_cmdline.a 00:01:52.961 [135/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:52.961 [136/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:52.961 [137/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:52.961 [138/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:52.961 [139/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:52.961 [140/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:52.961 [141/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.962 [142/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:52.962 [143/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:52.962 [144/264] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:52.962 [145/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:52.962 [146/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:52.962 [147/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:52.962 [148/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:52.962 [149/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:52.962 [150/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:52.962 [151/264] Linking static target lib/librte_net.a 00:01:52.962 [152/264] Linking static target lib/librte_dmadev.a 00:01:52.962 [153/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:52.962 [154/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:52.962 [155/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:53.222 [156/264] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:53.222 [157/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:53.222 [158/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:53.222 [159/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:53.222 [160/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:53.222 [161/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:53.222 [162/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:53.222 [163/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:53.222 [164/264] Linking static target lib/librte_security.a 00:01:53.222 [165/264] Linking target lib/librte_log.so.24.0 00:01:53.222 [166/264] Linking static target lib/librte_reorder.a 00:01:53.222 [167/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:53.222 [168/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:53.222 [169/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:53.222 [170/264] Linking static target lib/librte_eal.a 00:01:53.222 [171/264] Linking static target lib/librte_compressdev.a 00:01:53.222 [172/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:53.222 [173/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:53.222 [174/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:53.222 [175/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.222 [176/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:53.222 [177/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:53.222 [178/264] Linking static target drivers/librte_bus_vdev.a 00:01:53.222 [179/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:53.222 [180/264] Linking static target lib/librte_mbuf.a 00:01:53.222 [181/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:53.222 [182/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:53.222 [183/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:53.222 [184/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:53.222 [185/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.222 [186/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:53.222 [187/264] Linking target lib/librte_kvargs.so.24.0 00:01:53.222 [188/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:53.222 [189/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:53.222 [190/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:53.222 [191/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:53.222 [192/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:53.222 [193/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:53.483 [194/264] Linking static target lib/librte_hash.a 00:01:53.483 [195/264] Linking static target lib/librte_cryptodev.a 00:01:53.483 [196/264] Linking static target drivers/librte_bus_pci.a 00:01:53.483 [197/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 
00:01:53.483 [198/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.483 [199/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:53.483 [200/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:53.483 [201/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:53.483 [202/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.483 [203/264] Linking static target drivers/librte_mempool_ring.a 00:01:53.483 [204/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.483 [205/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:53.483 [206/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.483 [207/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:53.483 [208/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.483 [209/264] Linking target lib/librte_telemetry.so.24.0 00:01:53.484 [210/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.744 [211/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.744 [212/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:53.744 [213/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.005 [214/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:54.005 [215/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.005 [216/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:54.005 [217/264] Linking static target lib/librte_ethdev.a 00:01:54.005 [218/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.005 [219/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.005 [220/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.266 [221/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.266 [222/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.527 [223/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.099 [224/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:55.099 [225/264] Linking static target lib/librte_vhost.a 00:01:55.673 [226/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.589 [227/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.179 [228/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.122 [229/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.122 [230/264] Linking target lib/librte_eal.so.24.0 00:02:05.122 [231/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:05.384 [232/264] Linking target lib/librte_ring.so.24.0 00:02:05.384 [233/264] Linking 
target lib/librte_meter.so.24.0 00:02:05.384 [234/264] Linking target lib/librte_pci.so.24.0 00:02:05.384 [235/264] Linking target lib/librte_timer.so.24.0 00:02:05.384 [236/264] Linking target drivers/librte_bus_vdev.so.24.0 00:02:05.384 [237/264] Linking target lib/librte_dmadev.so.24.0 00:02:05.384 [238/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:05.384 [239/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:05.384 [240/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:05.384 [241/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:05.384 [242/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:05.384 [243/264] Linking target lib/librte_rcu.so.24.0 00:02:05.384 [244/264] Linking target lib/librte_mempool.so.24.0 00:02:05.645 [245/264] Linking target drivers/librte_bus_pci.so.24.0 00:02:05.645 [246/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:05.645 [247/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:05.645 [248/264] Linking target drivers/librte_mempool_ring.so.24.0 00:02:05.645 [249/264] Linking target lib/librte_mbuf.so.24.0 00:02:05.907 [250/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:05.907 [251/264] Linking target lib/librte_reorder.so.24.0 00:02:05.907 [252/264] Linking target lib/librte_compressdev.so.24.0 00:02:05.907 [253/264] Linking target lib/librte_net.so.24.0 00:02:05.907 [254/264] Linking target lib/librte_cryptodev.so.24.0 00:02:06.168 [255/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:06.168 [256/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:06.168 [257/264] Linking target lib/librte_hash.so.24.0 00:02:06.168 [258/264] Linking target lib/librte_cmdline.so.24.0 00:02:06.168 [259/264] Linking target lib/librte_security.so.24.0 00:02:06.168 [260/264] Linking target lib/librte_ethdev.so.24.0 00:02:06.429 [261/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:06.429 [262/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:06.429 [263/264] Linking target lib/librte_power.so.24.0 00:02:06.429 [264/264] Linking target lib/librte_vhost.so.24.0 00:02:06.429 INFO: autodetecting backend as ninja 00:02:06.429 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:07.814 CC lib/log/log.o 00:02:07.814 CC lib/log/log_flags.o 00:02:07.814 CC lib/log/log_deprecated.o 00:02:07.814 CC lib/ut_mock/mock.o 00:02:07.814 CC lib/ut/ut.o 00:02:07.814 LIB libspdk_ut_mock.a 00:02:07.814 LIB libspdk_log.a 00:02:07.814 SO libspdk_ut_mock.so.6.0 00:02:07.814 SO libspdk_log.so.7.0 00:02:07.814 LIB libspdk_ut.a 00:02:07.814 SO libspdk_ut.so.2.0 00:02:07.814 SYMLINK libspdk_ut_mock.so 00:02:07.814 SYMLINK libspdk_log.so 00:02:07.814 SYMLINK libspdk_ut.so 00:02:08.387 CC lib/dma/dma.o 00:02:08.387 CXX lib/trace_parser/trace.o 00:02:08.387 CC lib/util/base64.o 00:02:08.387 CC lib/util/bit_array.o 00:02:08.387 CC lib/ioat/ioat.o 00:02:08.387 CC lib/util/cpuset.o 00:02:08.387 CC lib/util/crc16.o 00:02:08.387 CC lib/util/crc32.o 00:02:08.387 CC lib/util/crc32c.o 00:02:08.387 CC lib/util/crc32_ieee.o 00:02:08.387 CC lib/util/crc64.o 
00:02:08.387 CC lib/util/dif.o 00:02:08.387 CC lib/util/fd.o 00:02:08.387 CC lib/util/file.o 00:02:08.387 CC lib/util/hexlify.o 00:02:08.387 CC lib/util/iov.o 00:02:08.387 CC lib/util/math.o 00:02:08.387 CC lib/util/pipe.o 00:02:08.387 CC lib/util/strerror_tls.o 00:02:08.387 CC lib/util/string.o 00:02:08.387 CC lib/util/uuid.o 00:02:08.387 CC lib/util/fd_group.o 00:02:08.387 CC lib/util/xor.o 00:02:08.387 CC lib/util/zipf.o 00:02:08.387 CC lib/vfio_user/host/vfio_user.o 00:02:08.387 CC lib/vfio_user/host/vfio_user_pci.o 00:02:08.387 LIB libspdk_dma.a 00:02:08.647 SO libspdk_dma.so.4.0 00:02:08.647 LIB libspdk_ioat.a 00:02:08.647 SYMLINK libspdk_dma.so 00:02:08.647 SO libspdk_ioat.so.7.0 00:02:08.647 SYMLINK libspdk_ioat.so 00:02:08.647 LIB libspdk_vfio_user.a 00:02:08.647 SO libspdk_vfio_user.so.5.0 00:02:08.971 LIB libspdk_util.a 00:02:08.971 SYMLINK libspdk_vfio_user.so 00:02:08.971 SO libspdk_util.so.9.0 00:02:08.971 SYMLINK libspdk_util.so 00:02:09.259 LIB libspdk_trace_parser.a 00:02:09.259 SO libspdk_trace_parser.so.5.0 00:02:09.259 SYMLINK libspdk_trace_parser.so 00:02:09.259 CC lib/json/json_parse.o 00:02:09.259 CC lib/conf/conf.o 00:02:09.259 CC lib/env_dpdk/env.o 00:02:09.259 CC lib/json/json_util.o 00:02:09.259 CC lib/json/json_write.o 00:02:09.259 CC lib/rdma/common.o 00:02:09.259 CC lib/vmd/vmd.o 00:02:09.259 CC lib/env_dpdk/memory.o 00:02:09.259 CC lib/rdma/rdma_verbs.o 00:02:09.259 CC lib/vmd/led.o 00:02:09.259 CC lib/env_dpdk/pci.o 00:02:09.259 CC lib/env_dpdk/init.o 00:02:09.259 CC lib/env_dpdk/threads.o 00:02:09.521 CC lib/env_dpdk/pci_ioat.o 00:02:09.521 CC lib/env_dpdk/pci_virtio.o 00:02:09.521 CC lib/env_dpdk/pci_vmd.o 00:02:09.521 CC lib/idxd/idxd.o 00:02:09.521 CC lib/env_dpdk/pci_idxd.o 00:02:09.521 CC lib/idxd/idxd_user.o 00:02:09.521 CC lib/env_dpdk/pci_event.o 00:02:09.521 CC lib/env_dpdk/sigbus_handler.o 00:02:09.521 CC lib/env_dpdk/pci_dpdk.o 00:02:09.521 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:09.521 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:09.521 LIB libspdk_conf.a 00:02:09.782 SO libspdk_conf.so.6.0 00:02:09.782 LIB libspdk_json.a 00:02:09.782 LIB libspdk_rdma.a 00:02:09.782 SO libspdk_json.so.6.0 00:02:09.782 SYMLINK libspdk_conf.so 00:02:09.782 SO libspdk_rdma.so.6.0 00:02:09.782 SYMLINK libspdk_json.so 00:02:09.782 SYMLINK libspdk_rdma.so 00:02:10.044 LIB libspdk_idxd.a 00:02:10.044 SO libspdk_idxd.so.12.0 00:02:10.044 LIB libspdk_vmd.a 00:02:10.044 SYMLINK libspdk_idxd.so 00:02:10.044 SO libspdk_vmd.so.6.0 00:02:10.044 SYMLINK libspdk_vmd.so 00:02:10.306 CC lib/jsonrpc/jsonrpc_server.o 00:02:10.306 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:10.306 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:10.306 CC lib/jsonrpc/jsonrpc_client.o 00:02:10.306 LIB libspdk_jsonrpc.a 00:02:10.567 SO libspdk_jsonrpc.so.6.0 00:02:10.567 SYMLINK libspdk_jsonrpc.so 00:02:10.567 LIB libspdk_env_dpdk.a 00:02:10.567 SO libspdk_env_dpdk.so.14.0 00:02:10.827 SYMLINK libspdk_env_dpdk.so 00:02:10.827 CC lib/rpc/rpc.o 00:02:11.088 LIB libspdk_rpc.a 00:02:11.088 SO libspdk_rpc.so.6.0 00:02:11.349 SYMLINK libspdk_rpc.so 00:02:11.610 CC lib/keyring/keyring.o 00:02:11.610 CC lib/keyring/keyring_rpc.o 00:02:11.610 CC lib/trace/trace.o 00:02:11.610 CC lib/trace/trace_flags.o 00:02:11.610 CC lib/trace/trace_rpc.o 00:02:11.610 CC lib/notify/notify.o 00:02:11.610 CC lib/notify/notify_rpc.o 00:02:11.872 LIB libspdk_notify.a 00:02:11.872 SO libspdk_notify.so.6.0 00:02:11.872 LIB libspdk_keyring.a 00:02:11.872 LIB libspdk_trace.a 00:02:11.872 SO libspdk_keyring.so.1.0 00:02:11.872 SO libspdk_trace.so.10.0 
00:02:11.872 SYMLINK libspdk_notify.so 00:02:11.872 SYMLINK libspdk_keyring.so 00:02:11.872 SYMLINK libspdk_trace.so 00:02:12.443 CC lib/sock/sock.o 00:02:12.443 CC lib/sock/sock_rpc.o 00:02:12.443 CC lib/thread/thread.o 00:02:12.443 CC lib/thread/iobuf.o 00:02:12.704 LIB libspdk_sock.a 00:02:12.704 SO libspdk_sock.so.9.0 00:02:12.704 SYMLINK libspdk_sock.so 00:02:13.275 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:13.275 CC lib/nvme/nvme_ctrlr.o 00:02:13.275 CC lib/nvme/nvme_ns_cmd.o 00:02:13.275 CC lib/nvme/nvme_fabric.o 00:02:13.275 CC lib/nvme/nvme_ns.o 00:02:13.275 CC lib/nvme/nvme_pcie_common.o 00:02:13.275 CC lib/nvme/nvme_pcie.o 00:02:13.275 CC lib/nvme/nvme_qpair.o 00:02:13.275 CC lib/nvme/nvme.o 00:02:13.275 CC lib/nvme/nvme_quirks.o 00:02:13.275 CC lib/nvme/nvme_transport.o 00:02:13.275 CC lib/nvme/nvme_discovery.o 00:02:13.275 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:13.276 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:13.276 CC lib/nvme/nvme_tcp.o 00:02:13.276 CC lib/nvme/nvme_opal.o 00:02:13.276 CC lib/nvme/nvme_io_msg.o 00:02:13.276 CC lib/nvme/nvme_poll_group.o 00:02:13.276 CC lib/nvme/nvme_zns.o 00:02:13.276 CC lib/nvme/nvme_stubs.o 00:02:13.276 CC lib/nvme/nvme_auth.o 00:02:13.276 CC lib/nvme/nvme_cuse.o 00:02:13.276 CC lib/nvme/nvme_vfio_user.o 00:02:13.276 CC lib/nvme/nvme_rdma.o 00:02:13.536 LIB libspdk_thread.a 00:02:13.536 SO libspdk_thread.so.10.0 00:02:13.796 SYMLINK libspdk_thread.so 00:02:14.057 CC lib/accel/accel.o 00:02:14.057 CC lib/accel/accel_rpc.o 00:02:14.057 CC lib/accel/accel_sw.o 00:02:14.057 CC lib/init/json_config.o 00:02:14.057 CC lib/init/subsystem.o 00:02:14.057 CC lib/init/subsystem_rpc.o 00:02:14.057 CC lib/init/rpc.o 00:02:14.057 CC lib/blob/blobstore.o 00:02:14.057 CC lib/vfu_tgt/tgt_endpoint.o 00:02:14.057 CC lib/vfu_tgt/tgt_rpc.o 00:02:14.057 CC lib/virtio/virtio.o 00:02:14.057 CC lib/blob/request.o 00:02:14.057 CC lib/blob/zeroes.o 00:02:14.057 CC lib/virtio/virtio_vhost_user.o 00:02:14.057 CC lib/virtio/virtio_vfio_user.o 00:02:14.057 CC lib/blob/blob_bs_dev.o 00:02:14.057 CC lib/virtio/virtio_pci.o 00:02:14.318 LIB libspdk_init.a 00:02:14.318 SO libspdk_init.so.5.0 00:02:14.318 LIB libspdk_vfu_tgt.a 00:02:14.318 LIB libspdk_virtio.a 00:02:14.318 SO libspdk_vfu_tgt.so.3.0 00:02:14.318 SYMLINK libspdk_init.so 00:02:14.318 SO libspdk_virtio.so.7.0 00:02:14.318 SYMLINK libspdk_vfu_tgt.so 00:02:14.318 SYMLINK libspdk_virtio.so 00:02:14.579 CC lib/event/app.o 00:02:14.579 CC lib/event/reactor.o 00:02:14.579 CC lib/event/log_rpc.o 00:02:14.579 CC lib/event/app_rpc.o 00:02:14.579 CC lib/event/scheduler_static.o 00:02:14.839 LIB libspdk_accel.a 00:02:14.839 SO libspdk_accel.so.15.0 00:02:14.839 LIB libspdk_nvme.a 00:02:14.839 SYMLINK libspdk_accel.so 00:02:14.839 SO libspdk_nvme.so.13.0 00:02:15.100 LIB libspdk_event.a 00:02:15.100 SO libspdk_event.so.13.0 00:02:15.100 SYMLINK libspdk_event.so 00:02:15.100 CC lib/bdev/bdev.o 00:02:15.100 CC lib/bdev/bdev_rpc.o 00:02:15.100 CC lib/bdev/bdev_zone.o 00:02:15.100 CC lib/bdev/part.o 00:02:15.100 CC lib/bdev/scsi_nvme.o 00:02:15.361 SYMLINK libspdk_nvme.so 00:02:16.300 LIB libspdk_blob.a 00:02:16.300 SO libspdk_blob.so.11.0 00:02:16.561 SYMLINK libspdk_blob.so 00:02:16.822 CC lib/lvol/lvol.o 00:02:16.822 CC lib/blobfs/blobfs.o 00:02:16.822 CC lib/blobfs/tree.o 00:02:17.394 LIB libspdk_bdev.a 00:02:17.394 SO libspdk_bdev.so.15.0 00:02:17.654 SYMLINK libspdk_bdev.so 00:02:17.654 LIB libspdk_blobfs.a 00:02:17.654 SO libspdk_blobfs.so.10.0 00:02:17.654 LIB libspdk_lvol.a 00:02:17.654 SO libspdk_lvol.so.10.0 00:02:17.654 
SYMLINK libspdk_blobfs.so 00:02:17.654 SYMLINK libspdk_lvol.so 00:02:17.915 CC lib/ublk/ublk.o 00:02:17.915 CC lib/nvmf/ctrlr.o 00:02:17.915 CC lib/ublk/ublk_rpc.o 00:02:17.915 CC lib/nvmf/ctrlr_discovery.o 00:02:17.915 CC lib/scsi/dev.o 00:02:17.915 CC lib/nbd/nbd.o 00:02:17.915 CC lib/scsi/lun.o 00:02:17.915 CC lib/nbd/nbd_rpc.o 00:02:17.915 CC lib/nvmf/ctrlr_bdev.o 00:02:17.915 CC lib/scsi/port.o 00:02:17.915 CC lib/nvmf/subsystem.o 00:02:17.915 CC lib/scsi/scsi.o 00:02:17.915 CC lib/ftl/ftl_core.o 00:02:17.915 CC lib/ftl/ftl_layout.o 00:02:17.915 CC lib/nvmf/nvmf.o 00:02:17.915 CC lib/ftl/ftl_init.o 00:02:17.915 CC lib/scsi/scsi_bdev.o 00:02:17.915 CC lib/nvmf/nvmf_rpc.o 00:02:17.915 CC lib/scsi/scsi_pr.o 00:02:17.915 CC lib/nvmf/transport.o 00:02:17.915 CC lib/ftl/ftl_debug.o 00:02:17.915 CC lib/scsi/scsi_rpc.o 00:02:17.915 CC lib/ftl/ftl_io.o 00:02:17.915 CC lib/nvmf/tcp.o 00:02:17.915 CC lib/scsi/task.o 00:02:17.915 CC lib/ftl/ftl_sb.o 00:02:17.915 CC lib/nvmf/stubs.o 00:02:17.915 CC lib/ftl/ftl_l2p.o 00:02:17.915 CC lib/nvmf/mdns_server.o 00:02:17.915 CC lib/ftl/ftl_l2p_flat.o 00:02:17.915 CC lib/nvmf/vfio_user.o 00:02:17.915 CC lib/ftl/ftl_nv_cache.o 00:02:17.915 CC lib/nvmf/rdma.o 00:02:17.915 CC lib/ftl/ftl_band.o 00:02:17.915 CC lib/nvmf/auth.o 00:02:17.915 CC lib/ftl/ftl_band_ops.o 00:02:17.915 CC lib/ftl/ftl_writer.o 00:02:17.915 CC lib/ftl/ftl_rq.o 00:02:17.915 CC lib/ftl/ftl_reloc.o 00:02:17.915 CC lib/ftl/ftl_l2p_cache.o 00:02:17.915 CC lib/ftl/ftl_p2l.o 00:02:17.915 CC lib/ftl/mngt/ftl_mngt.o 00:02:17.915 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:17.915 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:17.915 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:17.915 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:17.915 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:17.915 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:17.915 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:17.915 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:17.915 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:17.915 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:17.915 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:17.915 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:17.915 CC lib/ftl/utils/ftl_bitmap.o 00:02:17.915 CC lib/ftl/utils/ftl_conf.o 00:02:17.915 CC lib/ftl/utils/ftl_md.o 00:02:17.915 CC lib/ftl/utils/ftl_mempool.o 00:02:17.915 CC lib/ftl/utils/ftl_property.o 00:02:17.915 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:17.915 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:17.915 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:17.915 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:17.915 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:17.915 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:17.915 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:17.915 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:17.915 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:17.915 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:17.915 CC lib/ftl/base/ftl_base_dev.o 00:02:17.915 CC lib/ftl/ftl_trace.o 00:02:17.915 CC lib/ftl/base/ftl_base_bdev.o 00:02:18.482 LIB libspdk_nbd.a 00:02:18.482 SO libspdk_nbd.so.7.0 00:02:18.482 LIB libspdk_scsi.a 00:02:18.482 SYMLINK libspdk_nbd.so 00:02:18.482 SO libspdk_scsi.so.9.0 00:02:18.482 LIB libspdk_ublk.a 00:02:18.482 SYMLINK libspdk_scsi.so 00:02:18.741 SO libspdk_ublk.so.3.0 00:02:18.741 SYMLINK libspdk_ublk.so 00:02:18.741 LIB libspdk_ftl.a 00:02:19.000 CC lib/iscsi/conn.o 00:02:19.000 CC lib/iscsi/iscsi.o 00:02:19.000 CC lib/iscsi/init_grp.o 00:02:19.000 CC lib/iscsi/md5.o 00:02:19.000 CC lib/vhost/vhost.o 00:02:19.000 CC lib/iscsi/param.o 00:02:19.000 CC lib/vhost/vhost_rpc.o 00:02:19.000 CC lib/iscsi/portal_grp.o 00:02:19.000 
CC lib/vhost/vhost_scsi.o 00:02:19.001 CC lib/vhost/vhost_blk.o 00:02:19.001 CC lib/iscsi/tgt_node.o 00:02:19.001 CC lib/vhost/rte_vhost_user.o 00:02:19.001 CC lib/iscsi/iscsi_subsystem.o 00:02:19.001 CC lib/iscsi/iscsi_rpc.o 00:02:19.001 CC lib/iscsi/task.o 00:02:19.001 SO libspdk_ftl.so.9.0 00:02:19.261 SYMLINK libspdk_ftl.so 00:02:19.833 LIB libspdk_nvmf.a 00:02:19.833 SO libspdk_nvmf.so.18.0 00:02:19.833 LIB libspdk_vhost.a 00:02:19.833 SO libspdk_vhost.so.8.0 00:02:20.095 SYMLINK libspdk_vhost.so 00:02:20.095 SYMLINK libspdk_nvmf.so 00:02:20.095 LIB libspdk_iscsi.a 00:02:20.095 SO libspdk_iscsi.so.8.0 00:02:20.357 SYMLINK libspdk_iscsi.so 00:02:20.930 CC module/env_dpdk/env_dpdk_rpc.o 00:02:20.930 CC module/vfu_device/vfu_virtio.o 00:02:20.930 CC module/vfu_device/vfu_virtio_blk.o 00:02:20.930 CC module/vfu_device/vfu_virtio_scsi.o 00:02:20.930 CC module/vfu_device/vfu_virtio_rpc.o 00:02:20.930 LIB libspdk_env_dpdk_rpc.a 00:02:21.190 SO libspdk_env_dpdk_rpc.so.6.0 00:02:21.190 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:21.190 CC module/blob/bdev/blob_bdev.o 00:02:21.190 CC module/accel/error/accel_error.o 00:02:21.190 CC module/accel/error/accel_error_rpc.o 00:02:21.190 CC module/keyring/file/keyring.o 00:02:21.190 CC module/keyring/file/keyring_rpc.o 00:02:21.190 CC module/sock/posix/posix.o 00:02:21.190 CC module/accel/dsa/accel_dsa.o 00:02:21.190 CC module/accel/ioat/accel_ioat.o 00:02:21.190 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:21.190 CC module/accel/ioat/accel_ioat_rpc.o 00:02:21.190 CC module/scheduler/gscheduler/gscheduler.o 00:02:21.190 CC module/accel/dsa/accel_dsa_rpc.o 00:02:21.190 CC module/accel/iaa/accel_iaa.o 00:02:21.190 CC module/accel/iaa/accel_iaa_rpc.o 00:02:21.190 SYMLINK libspdk_env_dpdk_rpc.so 00:02:21.190 LIB libspdk_scheduler_gscheduler.a 00:02:21.190 LIB libspdk_scheduler_dpdk_governor.a 00:02:21.190 LIB libspdk_keyring_file.a 00:02:21.190 LIB libspdk_accel_error.a 00:02:21.190 SO libspdk_scheduler_gscheduler.so.4.0 00:02:21.190 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:21.190 LIB libspdk_accel_ioat.a 00:02:21.451 LIB libspdk_scheduler_dynamic.a 00:02:21.451 SO libspdk_keyring_file.so.1.0 00:02:21.451 SO libspdk_accel_error.so.2.0 00:02:21.451 LIB libspdk_accel_iaa.a 00:02:21.451 SO libspdk_accel_ioat.so.6.0 00:02:21.451 LIB libspdk_accel_dsa.a 00:02:21.451 SO libspdk_scheduler_dynamic.so.4.0 00:02:21.451 LIB libspdk_blob_bdev.a 00:02:21.451 SYMLINK libspdk_scheduler_gscheduler.so 00:02:21.451 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:21.451 SO libspdk_accel_iaa.so.3.0 00:02:21.451 SYMLINK libspdk_accel_error.so 00:02:21.451 SYMLINK libspdk_keyring_file.so 00:02:21.451 SO libspdk_blob_bdev.so.11.0 00:02:21.451 SO libspdk_accel_dsa.so.5.0 00:02:21.451 SYMLINK libspdk_accel_ioat.so 00:02:21.451 SYMLINK libspdk_scheduler_dynamic.so 00:02:21.451 SYMLINK libspdk_accel_iaa.so 00:02:21.451 SYMLINK libspdk_blob_bdev.so 00:02:21.451 SYMLINK libspdk_accel_dsa.so 00:02:21.451 LIB libspdk_vfu_device.a 00:02:21.451 SO libspdk_vfu_device.so.3.0 00:02:21.712 SYMLINK libspdk_vfu_device.so 00:02:21.712 LIB libspdk_sock_posix.a 00:02:21.712 SO libspdk_sock_posix.so.6.0 00:02:21.973 SYMLINK libspdk_sock_posix.so 00:02:21.973 CC module/blobfs/bdev/blobfs_bdev.o 00:02:21.973 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:21.973 CC module/bdev/delay/vbdev_delay.o 00:02:21.973 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:21.973 CC module/bdev/passthru/vbdev_passthru.o 00:02:21.973 CC module/bdev/null/bdev_null.o 00:02:21.973 CC 
module/bdev/passthru/vbdev_passthru_rpc.o 00:02:21.973 CC module/bdev/null/bdev_null_rpc.o 00:02:21.973 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:21.973 CC module/bdev/error/vbdev_error.o 00:02:21.973 CC module/bdev/nvme/bdev_nvme.o 00:02:21.973 CC module/bdev/error/vbdev_error_rpc.o 00:02:21.973 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:21.973 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:21.973 CC module/bdev/ftl/bdev_ftl.o 00:02:21.973 CC module/bdev/nvme/nvme_rpc.o 00:02:21.973 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:21.973 CC module/bdev/nvme/bdev_mdns_client.o 00:02:21.973 CC module/bdev/lvol/vbdev_lvol.o 00:02:21.973 CC module/bdev/nvme/vbdev_opal.o 00:02:21.973 CC module/bdev/gpt/gpt.o 00:02:21.973 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:21.973 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:21.973 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:21.973 CC module/bdev/gpt/vbdev_gpt.o 00:02:21.973 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:21.973 CC module/bdev/split/vbdev_split.o 00:02:21.973 CC module/bdev/malloc/bdev_malloc.o 00:02:21.973 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:21.973 CC module/bdev/split/vbdev_split_rpc.o 00:02:21.973 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:21.973 CC module/bdev/raid/bdev_raid.o 00:02:21.973 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:21.973 CC module/bdev/iscsi/bdev_iscsi.o 00:02:21.973 CC module/bdev/raid/bdev_raid_sb.o 00:02:21.973 CC module/bdev/raid/bdev_raid_rpc.o 00:02:21.973 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:21.973 CC module/bdev/raid/raid0.o 00:02:21.973 CC module/bdev/aio/bdev_aio.o 00:02:21.973 CC module/bdev/aio/bdev_aio_rpc.o 00:02:21.973 CC module/bdev/raid/raid1.o 00:02:21.973 CC module/bdev/raid/concat.o 00:02:22.233 LIB libspdk_blobfs_bdev.a 00:02:22.233 SO libspdk_blobfs_bdev.so.6.0 00:02:22.233 LIB libspdk_bdev_null.a 00:02:22.233 LIB libspdk_bdev_split.a 00:02:22.233 LIB libspdk_bdev_zone_block.a 00:02:22.233 LIB libspdk_bdev_gpt.a 00:02:22.233 SO libspdk_bdev_split.so.6.0 00:02:22.233 SO libspdk_bdev_null.so.6.0 00:02:22.234 LIB libspdk_bdev_error.a 00:02:22.234 SO libspdk_bdev_zone_block.so.6.0 00:02:22.234 SYMLINK libspdk_blobfs_bdev.so 00:02:22.234 LIB libspdk_bdev_passthru.a 00:02:22.494 LIB libspdk_bdev_ftl.a 00:02:22.494 SO libspdk_bdev_error.so.6.0 00:02:22.494 SO libspdk_bdev_gpt.so.6.0 00:02:22.494 SO libspdk_bdev_passthru.so.6.0 00:02:22.494 SYMLINK libspdk_bdev_zone_block.so 00:02:22.494 SYMLINK libspdk_bdev_null.so 00:02:22.494 LIB libspdk_bdev_aio.a 00:02:22.494 LIB libspdk_bdev_delay.a 00:02:22.494 SYMLINK libspdk_bdev_split.so 00:02:22.494 SO libspdk_bdev_ftl.so.6.0 00:02:22.494 LIB libspdk_bdev_iscsi.a 00:02:22.494 SO libspdk_bdev_aio.so.6.0 00:02:22.494 SYMLINK libspdk_bdev_error.so 00:02:22.495 SYMLINK libspdk_bdev_gpt.so 00:02:22.495 LIB libspdk_bdev_malloc.a 00:02:22.495 SO libspdk_bdev_delay.so.6.0 00:02:22.495 SO libspdk_bdev_iscsi.so.6.0 00:02:22.495 SYMLINK libspdk_bdev_passthru.so 00:02:22.495 SYMLINK libspdk_bdev_ftl.so 00:02:22.495 SO libspdk_bdev_malloc.so.6.0 00:02:22.495 SYMLINK libspdk_bdev_aio.so 00:02:22.495 SYMLINK libspdk_bdev_delay.so 00:02:22.495 LIB libspdk_bdev_lvol.a 00:02:22.495 SYMLINK libspdk_bdev_iscsi.so 00:02:22.495 LIB libspdk_bdev_virtio.a 00:02:22.495 SYMLINK libspdk_bdev_malloc.so 00:02:22.495 SO libspdk_bdev_lvol.so.6.0 00:02:22.495 SO libspdk_bdev_virtio.so.6.0 00:02:22.755 SYMLINK libspdk_bdev_virtio.so 00:02:22.755 SYMLINK libspdk_bdev_lvol.so 00:02:23.017 LIB libspdk_bdev_raid.a 00:02:23.017 SO libspdk_bdev_raid.so.6.0 
00:02:23.017 SYMLINK libspdk_bdev_raid.so 00:02:23.960 LIB libspdk_bdev_nvme.a 00:02:23.960 SO libspdk_bdev_nvme.so.7.0 00:02:24.222 SYMLINK libspdk_bdev_nvme.so 00:02:24.794 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:24.794 CC module/event/subsystems/keyring/keyring.o 00:02:24.794 CC module/event/subsystems/iobuf/iobuf.o 00:02:24.794 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:24.794 CC module/event/subsystems/sock/sock.o 00:02:24.794 CC module/event/subsystems/vmd/vmd.o 00:02:24.794 CC module/event/subsystems/scheduler/scheduler.o 00:02:24.794 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:24.794 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:25.056 LIB libspdk_event_sock.a 00:02:25.056 LIB libspdk_event_keyring.a 00:02:25.056 LIB libspdk_event_vfu_tgt.a 00:02:25.056 LIB libspdk_event_vhost_blk.a 00:02:25.056 LIB libspdk_event_scheduler.a 00:02:25.056 LIB libspdk_event_vmd.a 00:02:25.056 LIB libspdk_event_iobuf.a 00:02:25.056 SO libspdk_event_sock.so.5.0 00:02:25.056 SO libspdk_event_keyring.so.1.0 00:02:25.056 SO libspdk_event_vfu_tgt.so.3.0 00:02:25.056 SO libspdk_event_vhost_blk.so.3.0 00:02:25.056 SO libspdk_event_scheduler.so.4.0 00:02:25.056 SO libspdk_event_vmd.so.6.0 00:02:25.056 SO libspdk_event_iobuf.so.3.0 00:02:25.056 SYMLINK libspdk_event_keyring.so 00:02:25.056 SYMLINK libspdk_event_sock.so 00:02:25.056 SYMLINK libspdk_event_vfu_tgt.so 00:02:25.056 SYMLINK libspdk_event_vhost_blk.so 00:02:25.056 SYMLINK libspdk_event_scheduler.so 00:02:25.056 SYMLINK libspdk_event_vmd.so 00:02:25.056 SYMLINK libspdk_event_iobuf.so 00:02:25.629 CC module/event/subsystems/accel/accel.o 00:02:25.629 LIB libspdk_event_accel.a 00:02:25.629 SO libspdk_event_accel.so.6.0 00:02:25.629 SYMLINK libspdk_event_accel.so 00:02:26.201 CC module/event/subsystems/bdev/bdev.o 00:02:26.201 LIB libspdk_event_bdev.a 00:02:26.201 SO libspdk_event_bdev.so.6.0 00:02:26.463 SYMLINK libspdk_event_bdev.so 00:02:26.725 CC module/event/subsystems/nbd/nbd.o 00:02:26.725 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:26.725 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:26.725 CC module/event/subsystems/ublk/ublk.o 00:02:26.725 CC module/event/subsystems/scsi/scsi.o 00:02:26.725 LIB libspdk_event_nbd.a 00:02:26.985 LIB libspdk_event_ublk.a 00:02:26.985 LIB libspdk_event_scsi.a 00:02:26.985 SO libspdk_event_nbd.so.6.0 00:02:26.985 SO libspdk_event_ublk.so.3.0 00:02:26.985 SO libspdk_event_scsi.so.6.0 00:02:26.985 LIB libspdk_event_nvmf.a 00:02:26.985 SYMLINK libspdk_event_nbd.so 00:02:26.985 SO libspdk_event_nvmf.so.6.0 00:02:26.985 SYMLINK libspdk_event_ublk.so 00:02:26.985 SYMLINK libspdk_event_scsi.so 00:02:26.985 SYMLINK libspdk_event_nvmf.so 00:02:27.246 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:27.246 CC module/event/subsystems/iscsi/iscsi.o 00:02:27.507 LIB libspdk_event_vhost_scsi.a 00:02:27.507 SO libspdk_event_vhost_scsi.so.3.0 00:02:27.507 LIB libspdk_event_iscsi.a 00:02:27.507 SO libspdk_event_iscsi.so.6.0 00:02:27.507 SYMLINK libspdk_event_vhost_scsi.so 00:02:27.768 SYMLINK libspdk_event_iscsi.so 00:02:27.768 SO libspdk.so.6.0 00:02:27.768 SYMLINK libspdk.so 00:02:28.337 CC app/trace_record/trace_record.o 00:02:28.337 CC app/spdk_nvme_perf/perf.o 00:02:28.337 CXX app/trace/trace.o 00:02:28.337 CC app/spdk_nvme_identify/identify.o 00:02:28.337 CC app/spdk_nvme_discover/discovery_aer.o 00:02:28.337 CC app/spdk_lspci/spdk_lspci.o 00:02:28.337 CC test/rpc_client/rpc_client_test.o 00:02:28.337 TEST_HEADER include/spdk/accel.h 00:02:28.337 TEST_HEADER 
include/spdk/accel_module.h 00:02:28.337 TEST_HEADER include/spdk/assert.h 00:02:28.337 CC app/spdk_top/spdk_top.o 00:02:28.337 TEST_HEADER include/spdk/base64.h 00:02:28.337 TEST_HEADER include/spdk/barrier.h 00:02:28.337 TEST_HEADER include/spdk/bdev.h 00:02:28.337 TEST_HEADER include/spdk/bdev_module.h 00:02:28.337 TEST_HEADER include/spdk/bit_array.h 00:02:28.337 TEST_HEADER include/spdk/bdev_zone.h 00:02:28.337 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:28.337 TEST_HEADER include/spdk/blob_bdev.h 00:02:28.337 TEST_HEADER include/spdk/bit_pool.h 00:02:28.337 TEST_HEADER include/spdk/blob.h 00:02:28.337 TEST_HEADER include/spdk/blobfs.h 00:02:28.337 TEST_HEADER include/spdk/conf.h 00:02:28.337 CC app/iscsi_tgt/iscsi_tgt.o 00:02:28.337 TEST_HEADER include/spdk/config.h 00:02:28.337 TEST_HEADER include/spdk/cpuset.h 00:02:28.337 TEST_HEADER include/spdk/crc16.h 00:02:28.337 CC app/spdk_dd/spdk_dd.o 00:02:28.337 TEST_HEADER include/spdk/dif.h 00:02:28.337 TEST_HEADER include/spdk/crc32.h 00:02:28.337 TEST_HEADER include/spdk/crc64.h 00:02:28.337 TEST_HEADER include/spdk/endian.h 00:02:28.337 TEST_HEADER include/spdk/dma.h 00:02:28.337 TEST_HEADER include/spdk/env.h 00:02:28.337 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:28.337 TEST_HEADER include/spdk/fd_group.h 00:02:28.337 TEST_HEADER include/spdk/env_dpdk.h 00:02:28.337 TEST_HEADER include/spdk/event.h 00:02:28.337 CC app/vhost/vhost.o 00:02:28.337 TEST_HEADER include/spdk/file.h 00:02:28.337 TEST_HEADER include/spdk/fd.h 00:02:28.337 TEST_HEADER include/spdk/ftl.h 00:02:28.337 TEST_HEADER include/spdk/gpt_spec.h 00:02:28.337 CC app/nvmf_tgt/nvmf_main.o 00:02:28.337 TEST_HEADER include/spdk/histogram_data.h 00:02:28.337 TEST_HEADER include/spdk/hexlify.h 00:02:28.337 TEST_HEADER include/spdk/idxd_spec.h 00:02:28.337 TEST_HEADER include/spdk/init.h 00:02:28.337 TEST_HEADER include/spdk/idxd.h 00:02:28.337 TEST_HEADER include/spdk/ioat_spec.h 00:02:28.337 TEST_HEADER include/spdk/ioat.h 00:02:28.337 TEST_HEADER include/spdk/iscsi_spec.h 00:02:28.337 TEST_HEADER include/spdk/json.h 00:02:28.337 TEST_HEADER include/spdk/keyring.h 00:02:28.337 TEST_HEADER include/spdk/jsonrpc.h 00:02:28.337 TEST_HEADER include/spdk/keyring_module.h 00:02:28.337 TEST_HEADER include/spdk/likely.h 00:02:28.337 TEST_HEADER include/spdk/log.h 00:02:28.337 TEST_HEADER include/spdk/memory.h 00:02:28.337 TEST_HEADER include/spdk/lvol.h 00:02:28.337 TEST_HEADER include/spdk/mmio.h 00:02:28.337 TEST_HEADER include/spdk/nbd.h 00:02:28.337 TEST_HEADER include/spdk/notify.h 00:02:28.337 TEST_HEADER include/spdk/nvme.h 00:02:28.337 TEST_HEADER include/spdk/nvme_intel.h 00:02:28.337 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:28.337 CC app/spdk_tgt/spdk_tgt.o 00:02:28.337 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:28.337 TEST_HEADER include/spdk/nvme_spec.h 00:02:28.337 TEST_HEADER include/spdk/nvme_zns.h 00:02:28.337 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:28.337 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:28.337 TEST_HEADER include/spdk/nvmf.h 00:02:28.337 TEST_HEADER include/spdk/nvmf_transport.h 00:02:28.337 TEST_HEADER include/spdk/nvmf_spec.h 00:02:28.337 TEST_HEADER include/spdk/opal.h 00:02:28.337 TEST_HEADER include/spdk/opal_spec.h 00:02:28.337 TEST_HEADER include/spdk/pci_ids.h 00:02:28.337 TEST_HEADER include/spdk/queue.h 00:02:28.337 TEST_HEADER include/spdk/reduce.h 00:02:28.337 TEST_HEADER include/spdk/pipe.h 00:02:28.337 TEST_HEADER include/spdk/rpc.h 00:02:28.337 TEST_HEADER include/spdk/scsi.h 00:02:28.337 TEST_HEADER 
include/spdk/scheduler.h 00:02:28.337 TEST_HEADER include/spdk/scsi_spec.h 00:02:28.337 TEST_HEADER include/spdk/sock.h 00:02:28.337 TEST_HEADER include/spdk/stdinc.h 00:02:28.337 TEST_HEADER include/spdk/string.h 00:02:28.337 TEST_HEADER include/spdk/thread.h 00:02:28.337 TEST_HEADER include/spdk/trace.h 00:02:28.337 TEST_HEADER include/spdk/tree.h 00:02:28.337 TEST_HEADER include/spdk/ublk.h 00:02:28.337 TEST_HEADER include/spdk/trace_parser.h 00:02:28.337 TEST_HEADER include/spdk/util.h 00:02:28.337 TEST_HEADER include/spdk/version.h 00:02:28.337 TEST_HEADER include/spdk/uuid.h 00:02:28.338 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:28.338 TEST_HEADER include/spdk/vhost.h 00:02:28.338 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:28.338 TEST_HEADER include/spdk/xor.h 00:02:28.338 TEST_HEADER include/spdk/vmd.h 00:02:28.338 TEST_HEADER include/spdk/zipf.h 00:02:28.338 CXX test/cpp_headers/accel_module.o 00:02:28.338 CXX test/cpp_headers/accel.o 00:02:28.338 CXX test/cpp_headers/assert.o 00:02:28.338 CXX test/cpp_headers/base64.o 00:02:28.338 CXX test/cpp_headers/bdev.o 00:02:28.338 CXX test/cpp_headers/bdev_module.o 00:02:28.338 CXX test/cpp_headers/barrier.o 00:02:28.338 CXX test/cpp_headers/bdev_zone.o 00:02:28.338 CXX test/cpp_headers/bit_pool.o 00:02:28.338 CXX test/cpp_headers/bit_array.o 00:02:28.338 CXX test/cpp_headers/blobfs.o 00:02:28.338 CXX test/cpp_headers/blob_bdev.o 00:02:28.338 CXX test/cpp_headers/blob.o 00:02:28.338 CXX test/cpp_headers/blobfs_bdev.o 00:02:28.338 CXX test/cpp_headers/conf.o 00:02:28.338 CXX test/cpp_headers/cpuset.o 00:02:28.338 CXX test/cpp_headers/config.o 00:02:28.338 CXX test/cpp_headers/crc16.o 00:02:28.338 CXX test/cpp_headers/crc32.o 00:02:28.338 CXX test/cpp_headers/dif.o 00:02:28.338 CXX test/cpp_headers/crc64.o 00:02:28.338 CXX test/cpp_headers/endian.o 00:02:28.338 CXX test/cpp_headers/dma.o 00:02:28.338 CXX test/cpp_headers/env_dpdk.o 00:02:28.338 CXX test/cpp_headers/env.o 00:02:28.338 CXX test/cpp_headers/event.o 00:02:28.338 CXX test/cpp_headers/fd_group.o 00:02:28.338 CXX test/cpp_headers/fd.o 00:02:28.338 CXX test/cpp_headers/ftl.o 00:02:28.338 CXX test/cpp_headers/gpt_spec.o 00:02:28.338 CXX test/cpp_headers/file.o 00:02:28.603 CXX test/cpp_headers/hexlify.o 00:02:28.603 CXX test/cpp_headers/histogram_data.o 00:02:28.603 CXX test/cpp_headers/idxd.o 00:02:28.603 CXX test/cpp_headers/idxd_spec.o 00:02:28.603 CXX test/cpp_headers/ioat_spec.o 00:02:28.603 CXX test/cpp_headers/ioat.o 00:02:28.603 CXX test/cpp_headers/init.o 00:02:28.603 CXX test/cpp_headers/json.o 00:02:28.603 CXX test/cpp_headers/iscsi_spec.o 00:02:28.603 CXX test/cpp_headers/jsonrpc.o 00:02:28.603 CXX test/cpp_headers/keyring.o 00:02:28.603 CXX test/cpp_headers/likely.o 00:02:28.603 CXX test/cpp_headers/keyring_module.o 00:02:28.603 CXX test/cpp_headers/log.o 00:02:28.603 CXX test/cpp_headers/lvol.o 00:02:28.603 CXX test/cpp_headers/memory.o 00:02:28.603 CXX test/cpp_headers/mmio.o 00:02:28.603 CXX test/cpp_headers/nbd.o 00:02:28.603 CXX test/cpp_headers/notify.o 00:02:28.603 CXX test/cpp_headers/nvme_intel.o 00:02:28.603 CXX test/cpp_headers/nvme.o 00:02:28.603 CXX test/cpp_headers/nvme_ocssd.o 00:02:28.603 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:28.603 CXX test/cpp_headers/nvme_spec.o 00:02:28.603 CXX test/cpp_headers/nvmf_cmd.o 00:02:28.603 CXX test/cpp_headers/nvme_zns.o 00:02:28.603 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:28.603 CXX test/cpp_headers/nvmf.o 00:02:28.603 CXX test/cpp_headers/nvmf_transport.o 00:02:28.603 CXX test/cpp_headers/nvmf_spec.o 
00:02:28.603 CXX test/cpp_headers/opal.o 00:02:28.603 CXX test/cpp_headers/opal_spec.o 00:02:28.603 CXX test/cpp_headers/pci_ids.o 00:02:28.603 CXX test/cpp_headers/pipe.o 00:02:28.603 CXX test/cpp_headers/queue.o 00:02:28.603 CXX test/cpp_headers/reduce.o 00:02:28.603 CXX test/cpp_headers/rpc.o 00:02:28.603 CXX test/cpp_headers/scheduler.o 00:02:28.603 CC examples/accel/perf/accel_perf.o 00:02:28.603 CXX test/cpp_headers/scsi.o 00:02:28.603 CC examples/vmd/led/led.o 00:02:28.603 CC examples/nvme/arbitration/arbitration.o 00:02:28.603 CC examples/idxd/perf/perf.o 00:02:28.603 CC test/event/reactor/reactor.o 00:02:28.603 CC examples/nvme/reconnect/reconnect.o 00:02:28.603 CC examples/vmd/lsvmd/lsvmd.o 00:02:28.603 CC examples/blob/cli/blobcli.o 00:02:28.603 CC examples/nvme/hello_world/hello_world.o 00:02:28.603 CC examples/blob/hello_world/hello_blob.o 00:02:28.603 CC test/event/event_perf/event_perf.o 00:02:28.603 CC examples/util/zipf/zipf.o 00:02:28.603 CC examples/ioat/verify/verify.o 00:02:28.603 CC examples/ioat/perf/perf.o 00:02:28.603 CC test/nvme/connect_stress/connect_stress.o 00:02:28.603 CC test/env/vtophys/vtophys.o 00:02:28.603 CC test/nvme/startup/startup.o 00:02:28.603 CC examples/sock/hello_world/hello_sock.o 00:02:28.603 CC test/nvme/aer/aer.o 00:02:28.603 CC examples/nvme/abort/abort.o 00:02:28.603 CC test/nvme/simple_copy/simple_copy.o 00:02:28.603 CC test/app/jsoncat/jsoncat.o 00:02:28.603 CC app/fio/nvme/fio_plugin.o 00:02:28.603 CC test/nvme/overhead/overhead.o 00:02:28.603 CC test/event/reactor_perf/reactor_perf.o 00:02:28.603 CC test/nvme/reset/reset.o 00:02:28.603 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:28.603 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:28.603 CC test/env/memory/memory_ut.o 00:02:28.603 CC test/nvme/err_injection/err_injection.o 00:02:28.603 CC test/app/histogram_perf/histogram_perf.o 00:02:28.603 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:28.603 CC test/dma/test_dma/test_dma.o 00:02:28.603 CC test/thread/poller_perf/poller_perf.o 00:02:28.603 CC test/event/app_repeat/app_repeat.o 00:02:28.603 CC test/app/stub/stub.o 00:02:28.603 CC examples/nvme/hotplug/hotplug.o 00:02:28.603 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:28.603 CC test/nvme/sgl/sgl.o 00:02:28.603 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:28.603 CXX test/cpp_headers/scsi_spec.o 00:02:28.603 CC test/nvme/cuse/cuse.o 00:02:28.603 CC examples/nvmf/nvmf/nvmf.o 00:02:28.603 CC test/env/pci/pci_ut.o 00:02:28.603 CC test/nvme/boot_partition/boot_partition.o 00:02:28.603 CC test/accel/dif/dif.o 00:02:28.603 CC test/nvme/e2edp/nvme_dp.o 00:02:28.603 CC test/nvme/compliance/nvme_compliance.o 00:02:28.603 CC app/fio/bdev/fio_plugin.o 00:02:28.603 CC test/bdev/bdevio/bdevio.o 00:02:28.603 CC examples/bdev/hello_world/hello_bdev.o 00:02:28.603 CC test/nvme/reserve/reserve.o 00:02:28.603 CC test/nvme/fused_ordering/fused_ordering.o 00:02:28.603 LINK spdk_lspci 00:02:28.603 CC examples/bdev/bdevperf/bdevperf.o 00:02:28.603 CC test/nvme/fdp/fdp.o 00:02:28.603 CC test/blobfs/mkfs/mkfs.o 00:02:28.603 CC examples/thread/thread/thread_ex.o 00:02:28.603 CC test/event/scheduler/scheduler.o 00:02:28.603 CC test/app/bdev_svc/bdev_svc.o 00:02:28.875 LINK spdk_nvme_discover 00:02:28.875 LINK rpc_client_test 00:02:28.875 LINK iscsi_tgt 00:02:28.875 LINK vhost 00:02:28.875 LINK interrupt_tgt 00:02:28.875 LINK nvmf_tgt 00:02:29.134 CC test/env/mem_callbacks/mem_callbacks.o 00:02:29.134 CC test/lvol/esnap/esnap.o 00:02:29.134 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 
00:02:29.134 LINK spdk_trace_record 00:02:29.134 LINK lsvmd 00:02:29.134 LINK led 00:02:29.134 LINK spdk_tgt 00:02:29.134 LINK startup 00:02:29.134 LINK reactor 00:02:29.134 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:29.134 LINK event_perf 00:02:29.134 LINK zipf 00:02:29.134 LINK env_dpdk_post_init 00:02:29.134 LINK histogram_perf 00:02:29.134 LINK jsoncat 00:02:29.134 LINK poller_perf 00:02:29.134 LINK reactor_perf 00:02:29.392 LINK vtophys 00:02:29.392 LINK err_injection 00:02:29.392 CXX test/cpp_headers/sock.o 00:02:29.392 CXX test/cpp_headers/stdinc.o 00:02:29.392 CXX test/cpp_headers/string.o 00:02:29.392 LINK cmb_copy 00:02:29.392 CXX test/cpp_headers/thread.o 00:02:29.392 LINK doorbell_aers 00:02:29.392 CXX test/cpp_headers/trace.o 00:02:29.392 CXX test/cpp_headers/trace_parser.o 00:02:29.392 LINK connect_stress 00:02:29.392 CXX test/cpp_headers/tree.o 00:02:29.392 CXX test/cpp_headers/ublk.o 00:02:29.392 CXX test/cpp_headers/util.o 00:02:29.392 CXX test/cpp_headers/uuid.o 00:02:29.392 CXX test/cpp_headers/version.o 00:02:29.392 CXX test/cpp_headers/vfio_user_pci.o 00:02:29.392 CXX test/cpp_headers/vfio_user_spec.o 00:02:29.392 LINK stub 00:02:29.392 CXX test/cpp_headers/vhost.o 00:02:29.392 LINK hello_sock 00:02:29.392 LINK pmr_persistence 00:02:29.392 CXX test/cpp_headers/vmd.o 00:02:29.392 LINK app_repeat 00:02:29.392 CXX test/cpp_headers/xor.o 00:02:29.392 CXX test/cpp_headers/zipf.o 00:02:29.392 LINK simple_copy 00:02:29.392 LINK reserve 00:02:29.392 LINK boot_partition 00:02:29.392 LINK hello_world 00:02:29.392 LINK fused_ordering 00:02:29.392 LINK verify 00:02:29.392 LINK ioat_perf 00:02:29.392 LINK spdk_dd 00:02:29.392 LINK hello_blob 00:02:29.392 LINK sgl 00:02:29.392 LINK scheduler 00:02:29.392 LINK reset 00:02:29.392 LINK hotplug 00:02:29.392 LINK mkfs 00:02:29.392 LINK overhead 00:02:29.392 LINK bdev_svc 00:02:29.392 LINK reconnect 00:02:29.392 LINK aer 00:02:29.392 LINK hello_bdev 00:02:29.392 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:29.392 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:29.392 LINK nvmf 00:02:29.392 LINK arbitration 00:02:29.392 LINK nvme_dp 00:02:29.392 LINK nvme_compliance 00:02:29.392 LINK thread 00:02:29.393 LINK abort 00:02:29.393 LINK idxd_perf 00:02:29.652 LINK test_dma 00:02:29.652 LINK fdp 00:02:29.652 LINK accel_perf 00:02:29.652 LINK dif 00:02:29.652 LINK pci_ut 00:02:29.652 LINK bdevio 00:02:29.652 LINK spdk_trace 00:02:29.652 LINK blobcli 00:02:29.652 LINK spdk_nvme 00:02:29.652 LINK spdk_bdev 00:02:29.652 LINK nvme_manage 00:02:29.652 LINK nvme_fuzz 00:02:29.652 LINK spdk_nvme_perf 00:02:29.913 LINK vhost_fuzz 00:02:29.913 LINK spdk_nvme_identify 00:02:29.913 LINK mem_callbacks 00:02:29.913 LINK spdk_top 00:02:29.913 LINK bdevperf 00:02:30.174 LINK memory_ut 00:02:30.174 LINK cuse 00:02:30.746 LINK iscsi_fuzz 00:02:33.290 LINK esnap 00:02:33.551 00:02:33.551 real 0m49.990s 00:02:33.551 user 6m41.378s 00:02:33.551 sys 4m43.649s 00:02:33.551 19:17:59 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:33.551 19:17:59 make -- common/autotest_common.sh@10 -- $ set +x 00:02:33.551 ************************************ 00:02:33.551 END TEST make 00:02:33.551 ************************************ 00:02:33.551 19:17:59 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:33.551 19:17:59 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:33.551 19:17:59 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:33.551 19:17:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.551 19:17:59 
-- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:33.551 19:17:59 -- pm/common@44 -- $ pid=3236982 00:02:33.551 19:17:59 -- pm/common@50 -- $ kill -TERM 3236982 00:02:33.552 19:17:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.552 19:17:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:33.552 19:17:59 -- pm/common@44 -- $ pid=3236983 00:02:33.552 19:17:59 -- pm/common@50 -- $ kill -TERM 3236983 00:02:33.552 19:17:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.552 19:17:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:33.552 19:17:59 -- pm/common@44 -- $ pid=3236985 00:02:33.552 19:17:59 -- pm/common@50 -- $ kill -TERM 3236985 00:02:33.552 19:17:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.552 19:17:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:33.552 19:17:59 -- pm/common@44 -- $ pid=3237012 00:02:33.552 19:17:59 -- pm/common@50 -- $ sudo -E kill -TERM 3237012 00:02:33.813 19:17:59 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:33.813 19:17:59 -- nvmf/common.sh@7 -- # uname -s 00:02:33.813 19:17:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:33.813 19:17:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:33.813 19:17:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:33.813 19:17:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:33.813 19:17:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:33.813 19:17:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:33.813 19:17:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:33.813 19:17:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:33.813 19:17:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:33.813 19:17:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:33.813 19:17:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:33.813 19:17:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:33.813 19:17:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:33.813 19:17:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:33.813 19:17:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:33.813 19:17:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:33.813 19:17:59 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:33.813 19:17:59 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:33.813 19:17:59 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:33.813 19:17:59 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:33.813 19:17:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:33.813 19:17:59 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:33.813 19:17:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:33.813 19:17:59 -- paths/export.sh@5 -- # export PATH 00:02:33.813 19:17:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:33.813 19:17:59 -- nvmf/common.sh@47 -- # : 0 00:02:33.813 19:17:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:33.813 19:17:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:33.813 19:17:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:33.813 19:17:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:33.813 19:17:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:33.813 19:17:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:33.813 19:17:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:33.813 19:17:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:33.814 19:17:59 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:33.814 19:17:59 -- spdk/autotest.sh@32 -- # uname -s 00:02:33.814 19:17:59 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:33.814 19:17:59 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:33.814 19:17:59 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:33.814 19:17:59 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:33.814 19:17:59 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:33.814 19:17:59 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:33.814 19:17:59 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:33.814 19:17:59 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:33.814 19:17:59 -- spdk/autotest.sh@48 -- # udevadm_pid=3299180 00:02:33.814 19:17:59 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:33.814 19:17:59 -- pm/common@17 -- # local monitor 00:02:33.814 19:17:59 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:33.814 19:17:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.814 19:17:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.814 19:17:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.814 19:17:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.814 19:17:59 -- pm/common@21 -- # date +%s 00:02:33.814 19:17:59 -- pm/common@25 -- # sleep 1 00:02:33.814 19:17:59 -- pm/common@21 -- # date +%s 00:02:33.814 19:17:59 -- pm/common@21 -- # date +%s 00:02:33.814 19:17:59 -- pm/common@21 -- # date +%s 00:02:33.814 19:17:59 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715793479 00:02:33.814 19:17:59 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715793479 00:02:33.814 19:17:59 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715793479 00:02:33.814 19:17:59 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715793479 00:02:33.814 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715793479_collect-vmstat.pm.log 00:02:33.814 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715793479_collect-cpu-load.pm.log 00:02:33.814 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715793479_collect-cpu-temp.pm.log 00:02:33.814 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715793479_collect-bmc-pm.bmc.pm.log 00:02:34.763 19:18:00 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:34.763 19:18:00 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:34.763 19:18:00 -- common/autotest_common.sh@720 -- # xtrace_disable 00:02:34.763 19:18:00 -- common/autotest_common.sh@10 -- # set +x 00:02:34.763 19:18:00 -- spdk/autotest.sh@59 -- # create_test_list 00:02:34.763 19:18:00 -- common/autotest_common.sh@744 -- # xtrace_disable 00:02:34.763 19:18:00 -- common/autotest_common.sh@10 -- # set +x 00:02:34.763 19:18:00 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:34.763 19:18:00 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:34.763 19:18:00 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:34.763 19:18:00 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:34.763 19:18:00 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:34.763 19:18:00 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:34.763 19:18:00 -- common/autotest_common.sh@1451 -- # uname 00:02:34.763 19:18:00 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:02:34.763 19:18:00 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:34.763 19:18:00 -- common/autotest_common.sh@1471 -- # uname 00:02:34.763 19:18:00 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:02:34.763 19:18:00 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:34.763 19:18:00 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:34.763 19:18:00 -- spdk/autotest.sh@72 -- # hash lcov 00:02:34.763 19:18:00 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:34.763 19:18:00 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:34.763 --rc lcov_branch_coverage=1 00:02:34.763 --rc lcov_function_coverage=1 00:02:34.763 --rc genhtml_branch_coverage=1 00:02:34.763 --rc genhtml_function_coverage=1 00:02:34.763 --rc genhtml_legend=1 00:02:34.763 --rc geninfo_all_blocks=1 00:02:34.763 ' 
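These LCOV_OPTS settings switch on branch and function coverage for every lcov call the job makes, including the "Baseline" capture that follows a few lines below. A minimal sketch of how such a baseline is typically produced and later merged with post-test counts, assuming placeholder directories (./spdk for the instrumented tree, ./coverage for output) rather than this job's real paths:

#!/usr/bin/env bash
set -e
SRC=./spdk          # placeholder: build tree containing the .gcno files
OUT=./coverage      # placeholder: where the tracefiles and HTML report go
OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external"
mkdir -p "$OUT"

# 1. Baseline capture (-c -i): record every instrumented line with a zero count.
lcov $OPTS -q -c -i -d "$SRC" -t Baseline -o "$OUT/cov_base.info"

# 2. Run the test suite here so the .gcda counter files get written.

# 3. Capture the post-test counts and merge them with the baseline, so sources
#    that never executed still appear in the report at 0% instead of vanishing.
lcov $OPTS -q -c -d "$SRC" -t Tests -o "$OUT/cov_test.info"
lcov $OPTS -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
genhtml "$OUT/cov_total.info" -o "$OUT/html"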
00:02:34.763 19:18:00 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:34.763 --rc lcov_branch_coverage=1 00:02:34.763 --rc lcov_function_coverage=1 00:02:34.763 --rc genhtml_branch_coverage=1 00:02:34.763 --rc genhtml_function_coverage=1 00:02:34.763 --rc genhtml_legend=1 00:02:34.763 --rc geninfo_all_blocks=1 00:02:34.763 ' 00:02:34.763 19:18:00 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:34.763 --rc lcov_branch_coverage=1 00:02:34.763 --rc lcov_function_coverage=1 00:02:34.763 --rc genhtml_branch_coverage=1 00:02:34.763 --rc genhtml_function_coverage=1 00:02:34.763 --rc genhtml_legend=1 00:02:34.763 --rc geninfo_all_blocks=1 00:02:34.763 --no-external' 00:02:34.763 19:18:00 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:34.763 --rc lcov_branch_coverage=1 00:02:34.763 --rc lcov_function_coverage=1 00:02:34.763 --rc genhtml_branch_coverage=1 00:02:34.763 --rc genhtml_function_coverage=1 00:02:34.763 --rc genhtml_legend=1 00:02:34.763 --rc geninfo_all_blocks=1 00:02:34.763 --no-external' 00:02:34.763 19:18:00 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:35.071 lcov: LCOV version 1.14 00:02:35.071 19:18:01 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:47.310 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:47.310 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:47.310 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:47.310 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:47.310 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:47.310 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:47.310 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:47.310 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:02.226 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:02.226 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:02.226 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:02.226 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:02.226 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:02.226 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:02.226 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:02.226 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:02.226 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:02.226 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:02.226 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:02.226 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:02.226 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:02.226 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 
00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:02.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:02.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:02.228 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:02.228 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:02.228 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:02.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:02.228 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:02.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:02.228 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:02.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:02.228 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:02.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:02.228 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:02.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:02.228 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:02.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:02.228 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:02.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:02.228 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:02.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:02.228 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:02.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:02.228 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:02.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:02.228 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:02.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:02.228 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:02.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:02.228 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:02.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:02.228 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:02.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:02.228 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:02.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:02.228 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:02.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:02.228 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:02.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:02.228 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:02.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:02.228 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:02.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:02.228 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:02.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:02.228 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:02.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:02.228 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:02.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:02.228 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:02.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:02.228 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:02.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:02.228 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:02.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:02.228 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:02.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:02.228 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:02.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:02.228 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:02.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:02.228 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:02.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:02.228 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:04.145 19:18:29 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:04.145 19:18:29 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:04.145 19:18:29 -- common/autotest_common.sh@10 -- # set +x 00:03:04.145 19:18:29 -- spdk/autotest.sh@91 -- # rm -f 00:03:04.145 19:18:29 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:07.455 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:07.455 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:07.455 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:07.455 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:07.455 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:07.455 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:07.455 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:07.717 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:07.717 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:07.717 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:07.717 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:07.717 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:07.717 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:07.717 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:07.717 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:07.717 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:07.717 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:07.977 19:18:34 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:07.977 19:18:34 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:03:07.977 19:18:34 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:03:07.977 19:18:34 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:03:07.977 19:18:34 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:07.977 19:18:34 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:03:07.977 19:18:34 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:03:07.978 19:18:34 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:07.978 19:18:34 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:07.978 19:18:34 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:07.978 19:18:34 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:07.978 19:18:34 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:07.978 19:18:34 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:07.978 19:18:34 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:07.978 19:18:34 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:08.238 No valid GPT data, bailing 00:03:08.238 19:18:34 -- scripts/common.sh@391 -- # 
blkid -s PTTYPE -o value /dev/nvme0n1 00:03:08.238 19:18:34 -- scripts/common.sh@391 -- # pt= 00:03:08.238 19:18:34 -- scripts/common.sh@392 -- # return 1 00:03:08.238 19:18:34 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:08.238 1+0 records in 00:03:08.238 1+0 records out 00:03:08.238 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00176286 s, 595 MB/s 00:03:08.238 19:18:34 -- spdk/autotest.sh@118 -- # sync 00:03:08.238 19:18:34 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:08.238 19:18:34 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:08.238 19:18:34 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:16.381 19:18:42 -- spdk/autotest.sh@124 -- # uname -s 00:03:16.381 19:18:42 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:16.382 19:18:42 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:16.382 19:18:42 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:16.382 19:18:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:16.382 19:18:42 -- common/autotest_common.sh@10 -- # set +x 00:03:16.382 ************************************ 00:03:16.382 START TEST setup.sh 00:03:16.382 ************************************ 00:03:16.382 19:18:42 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:16.643 * Looking for test storage... 00:03:16.643 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:16.643 19:18:42 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:16.643 19:18:42 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:16.643 19:18:42 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:16.643 19:18:42 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:16.643 19:18:42 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:16.643 19:18:42 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:16.643 ************************************ 00:03:16.643 START TEST acl 00:03:16.643 ************************************ 00:03:16.643 19:18:42 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:16.643 * Looking for test storage... 
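In the trace above, the dd wipe only runs after block_in_use reports the disk free: spdk-gpt.py finds no valid GPT and blkid reports no partition-table type, so the helper returns non-zero and the first MiB of the namespace is cleared. A condensed sketch of that kind of guard (the helper body and device node here are illustrative, not the framework's own code):

#!/usr/bin/env bash
# Sketch: only scrub a block device that carries no recognizable partition table.
block_in_use() {
    local dev=$1 pt
    # blkid prints the partition-table type (gpt, dos, ...) or nothing at all.
    pt=$(blkid -s PTTYPE -o value "$dev")
    [[ -n $pt ]]     # exit 0 = "still in use", non-zero = safe to reuse
}

dev=/dev/nvme0n1     # illustrative device node
if ! block_in_use "$dev"; then
    # No partition table found: clear any stale metadata in the first MiB.
    dd if=/dev/zero of="$dev" bs=1M count=1
fi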
00:03:16.643 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:16.643 19:18:42 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:16.643 19:18:42 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:03:16.643 19:18:42 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:03:16.643 19:18:42 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:03:16.643 19:18:42 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:16.643 19:18:42 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:03:16.643 19:18:42 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:03:16.643 19:18:42 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:16.643 19:18:42 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:16.643 19:18:42 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:16.643 19:18:42 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:16.643 19:18:42 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:16.643 19:18:42 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:16.643 19:18:42 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:16.643 19:18:42 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:16.643 19:18:42 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:20.849 19:18:47 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:20.849 19:18:47 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:20.849 19:18:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:20.849 19:18:47 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:20.849 19:18:47 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:20.849 19:18:47 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:25.052 Hugepages 00:03:25.052 node hugesize free / total 00:03:25.052 19:18:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:25.052 19:18:50 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:25.052 19:18:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.052 19:18:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:25.052 19:18:50 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:25.052 19:18:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.052 19:18:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:25.052 19:18:50 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:25.052 19:18:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.052 00:03:25.052 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:25.052 19:18:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:25.052 19:18:50 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:25.052 19:18:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.052 19:18:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:03:25.052 19:18:50 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.052 19:18:50 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.052 19:18:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.052 19:18:50 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:01.1 == *:*:*.* ]] 00:03:25.052 19:18:50 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.052 19:18:50 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.052 19:18:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.052 19:18:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:03:25.052 19:18:50 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.052 19:18:50 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.052 19:18:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.052 19:18:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:03:25.052 19:18:50 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.052 19:18:50 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.052 19:18:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.052 19:18:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:03:25.052 19:18:50 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.052 19:18:50 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.052 19:18:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.052 19:18:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:03:25.052 19:18:50 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.052 19:18:50 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.052 19:18:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.052 19:18:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:03:25.052 19:18:50 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.052 19:18:50 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.052 19:18:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.052 19:18:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:03:25.052 19:18:50 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.052 19:18:50 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.052 19:18:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.052 19:18:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:03:25.052 19:18:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:25.052 19:18:51 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:25.052 19:18:51 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:25.052 19:18:51 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:25.052 19:18:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.052 19:18:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:03:25.052 19:18:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.052 19:18:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.052 19:18:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.052 19:18:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:03:25.052 19:18:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.052 19:18:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.052 19:18:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.052 19:18:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:03:25.052 19:18:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:03:25.052 19:18:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.052 19:18:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.052 19:18:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:03:25.052 19:18:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.052 19:18:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.052 19:18:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.052 19:18:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:03:25.052 19:18:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.052 19:18:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.052 19:18:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.052 19:18:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:03:25.052 19:18:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.052 19:18:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.052 19:18:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.052 19:18:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:03:25.052 19:18:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.052 19:18:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.052 19:18:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.052 19:18:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:03:25.052 19:18:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.052 19:18:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.052 19:18:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.052 19:18:51 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:25.052 19:18:51 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:25.052 19:18:51 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:25.052 19:18:51 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:25.052 19:18:51 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:25.052 ************************************ 00:03:25.052 START TEST denied 00:03:25.052 ************************************ 00:03:25.052 19:18:51 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:03:25.052 19:18:51 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:03:25.052 19:18:51 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:25.052 19:18:51 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:03:25.052 19:18:51 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:25.052 19:18:51 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:30.342 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:03:30.342 19:18:55 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:03:30.342 19:18:55 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:30.342 19:18:55 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:30.342 19:18:55 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:03:30.342 19:18:55 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:03:30.342 19:18:55 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:30.342 19:18:55 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:30.342 19:18:55 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:30.342 19:18:55 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:30.342 19:18:55 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:35.631 00:03:35.631 real 0m9.609s 00:03:35.631 user 0m2.962s 00:03:35.631 sys 0m5.782s 00:03:35.631 19:19:00 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:35.631 19:19:00 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:35.631 ************************************ 00:03:35.631 END TEST denied 00:03:35.631 ************************************ 00:03:35.631 19:19:00 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:35.631 19:19:00 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:35.631 19:19:00 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:35.631 19:19:00 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:35.631 ************************************ 00:03:35.631 START TEST allowed 00:03:35.631 ************************************ 00:03:35.631 19:19:00 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:03:35.631 19:19:00 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:35.631 19:19:00 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:35.631 19:19:00 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:35.631 19:19:00 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.631 19:19:00 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:40.921 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:40.921 19:19:06 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:40.921 19:19:06 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:40.921 19:19:06 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:40.921 19:19:06 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:40.921 19:19:06 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:45.257 00:03:45.257 real 0m10.273s 00:03:45.257 user 0m2.860s 00:03:45.257 sys 0m5.706s 00:03:45.257 19:19:11 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:45.257 19:19:11 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:45.257 ************************************ 00:03:45.257 END TEST allowed 00:03:45.257 ************************************ 00:03:45.257 00:03:45.257 real 0m28.537s 00:03:45.257 user 0m8.946s 00:03:45.257 sys 0m17.199s 00:03:45.257 19:19:11 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:45.257 19:19:11 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:45.257 ************************************ 00:03:45.257 END TEST acl 00:03:45.257 ************************************ 00:03:45.257 19:19:11 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:45.257 19:19:11 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:45.257 19:19:11 setup.sh -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:03:45.257 19:19:11 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:45.257 ************************************ 00:03:45.257 START TEST hugepages 00:03:45.257 ************************************ 00:03:45.257 19:19:11 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:45.257 * Looking for test storage... 00:03:45.257 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 102756644 kB' 'MemAvailable: 107240128 kB' 'Buffers: 4144 kB' 'Cached: 14600164 kB' 'SwapCached: 0 kB' 'Active: 10728624 kB' 'Inactive: 4481572 kB' 'Active(anon): 10088760 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 609360 kB' 'Mapped: 222528 kB' 'Shmem: 9482872 kB' 'KReclaimable: 372756 kB' 'Slab: 1254848 kB' 'SReclaimable: 372756 kB' 'SUnreclaim: 882092 kB' 'KernelStack: 27616 kB' 'PageTables: 9628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460888 kB' 'Committed_AS: 11534236 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237832 kB' 'VmallocChunk: 0 kB' 'Percpu: 129024 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4439412 kB' 'DirectMap2M: 50814976 kB' 'DirectMap1G: 80740352 kB' 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e 
]] 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.257 19:19:11 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.257 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.258 19:19:11 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # 
continue 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
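For context on the trace around this point: setup/common.sh's get_meminfo is walking /proc/meminfo with IFS=': ' and read -r var val _, skipping every key until the requested one (Hugepagesize) matches; the scan continues directly below and ends at the 'echo 2048'. A minimal stand-alone sketch of that field scan, under the assumption noted in the comments (the helper name here is hypothetical, and the real get_meminfo first snapshots the file via the mapfile/printf calls seen later in this log and also accepts a NUMA node argument):

# Simplified sketch of the get_meminfo field scan shown in the trace.
# 'meminfo_value' is a hypothetical name; only the no-node, /proc/meminfo case is covered.
meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }   # e.g. "Hugepagesize: 2048 kB" -> 2048
    done </proc/meminfo
    return 1
}
# meminfo_value Hugepagesize  ->  2048, matching the 'echo 2048' the trace reaches just below.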
00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.258 19:19:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.259 19:19:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.259 19:19:11 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:45.259 19:19:11 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:45.259 19:19:11 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:45.259 19:19:11 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:45.259 19:19:11 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:45.259 19:19:11 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:45.259 19:19:11 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:45.259 19:19:11 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:45.259 19:19:11 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:45.259 19:19:11 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:45.259 19:19:11 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:45.259 19:19:11 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.259 19:19:11 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:45.259 19:19:11 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.259 19:19:11 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:45.259 19:19:11 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:45.259 19:19:11 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:45.259 19:19:11 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:45.259 19:19:11 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:45.259 19:19:11 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:45.259 19:19:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:45.259 19:19:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:45.259 19:19:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:45.259 19:19:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:45.519 19:19:11 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:45.519 19:19:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:45.519 19:19:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:45.519 19:19:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:45.519 19:19:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:45.519 19:19:11 
setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:45.519 19:19:11 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:45.519 19:19:11 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:45.519 19:19:11 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:45.519 19:19:11 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:45.519 19:19:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:45.519 ************************************ 00:03:45.519 START TEST default_setup 00:03:45.519 ************************************ 00:03:45.519 19:19:11 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:03:45.519 19:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:45.519 19:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:45.519 19:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:45.519 19:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:45.519 19:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:45.519 19:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:45.519 19:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:45.519 19:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:45.519 19:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:45.519 19:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:45.519 19:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:45.519 19:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:45.519 19:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:45.519 19:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:45.519 19:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:45.519 19:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:45.519 19:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:45.519 19:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:45.519 19:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:45.519 19:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:45.519 19:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.519 19:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:49.729 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:49.729 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:49.729 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:49.729 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:49.729 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:49.729 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:49.729 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 
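At this point the trace has shown: default_hugepages=2048, get_nodes finding two NUMA nodes (node0 holding 2048 pages, node1 none), clear_hp echoing 0 into every per-node hugepage counter, CLEAR_HUGE=yes, the START TEST default_setup banner, get_test_nr_hugepages 2097152 0 resolving to nr_hugepages=1024 on node 0, and scripts/setup.sh starting to rebind ioatdma/nvme devices to vfio-pci (that PCI listing continues directly below this sketch). A simplified, hedged equivalent of the clear_hp and sizing steps; writing nr_hugepages requires root, and the redirection target is not visible in the xtrace, so the standard sysfs path is assumed here:

# 1) clear_hp: zero any pre-existing per-node hugepage reservation.
for node_dir in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node_dir"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"        # the 'echo 0' lines in the trace above
    done
done

# 2) get_test_nr_hugepages 2097152 0: convert the requested size (kB) into a page count.
size_kb=2097152          # requested test size (2 GiB)
hugepagesize_kb=2048     # default hugepage size found earlier in the trace
nr_hugepages=$(( size_kb / hugepagesize_kb ))   # 2097152 / 2048 = 1024
echo "$nr_hugepages hugepages to be placed on node 0"   # matches nr_hugepages=1024 in the trace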
00:03:49.729 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:49.729 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:49.729 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:49.729 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:49.729 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:49.729 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:49.729 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:49.729 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:49.729 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:49.729 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 104908916 kB' 'MemAvailable: 109392352 kB' 'Buffers: 4144 kB' 'Cached: 14600300 kB' 'SwapCached: 0 kB' 'Active: 10742644 kB' 'Inactive: 4481572 kB' 'Active(anon): 10102780 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623212 kB' 'Mapped: 222208 kB' 'Shmem: 9483008 kB' 'KReclaimable: 372660 kB' 'Slab: 1253064 kB' 'SReclaimable: 372660 kB' 'SUnreclaim: 880404 kB' 'KernelStack: 27392 kB' 'PageTables: 9324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 11544688 kB' 'VmallocTotal: 13743895347199 kB' 
'VmallocUsed: 237752 kB' 'VmallocChunk: 0 kB' 'Percpu: 129024 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4439412 kB' 'DirectMap2M: 50814976 kB' 'DirectMap1G: 80740352 kB' 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
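The large printf a little above this point is get_meminfo's snapshot of /proc/meminfo, captured before it scans for AnonHugePages (that scan continues below). The hugepage fields in it are what default_setup is being verified against: HugePages_Total and HugePages_Free are both 1024, so all pages were allocated and none are in use yet. A quick consistency check, using only values copied from that snapshot:

hugepages_total=1024     # HugePages_Total in the snapshot above
hugepagesize_kb=2048     # Hugepagesize
hugetlb_kb=2097152       # Hugetlb
if (( hugepages_total * hugepagesize_kb == hugetlb_kb )); then
    echo "1024 pages x 2048 kB = 2097152 kB (2 GiB) reserved, matching the size default_setup requested"
fi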
00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.997 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.998 19:19:16 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.998 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.999 19:19:16 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 
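The AnonHugePages scan has just finished (anon=0), and get_meminfo is being called again for HugePages_Surp, with HugePages_Rsvd to follow; each call is another full pass over the meminfo snapshot, and the snapshot above shows all three values are 0 on this box. Purely as an illustration of a cheaper single-pass alternative, and explicitly not what setup/common.sh does, the three fields could be collected with one awk run:

# Illustration only: gather all three verification fields in a single pass.
# Expected output here, per the snapshot above: anon=0 surp=0 resv=0.
read -r anon surp resv < <(awk -F': *' '
    $1 == "AnonHugePages"  { a = $2 + 0 }
    $1 == "HugePages_Surp" { s = $2 + 0 }
    $1 == "HugePages_Rsvd" { r = $2 + 0 }
    END { print a + 0, s + 0, r + 0 }' /proc/meminfo)
echo "anon=$anon surp=$surp resv=$resv"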
00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 104909128 kB' 'MemAvailable: 109392564 kB' 'Buffers: 4144 kB' 'Cached: 14600304 kB' 'SwapCached: 0 kB' 'Active: 10743024 kB' 'Inactive: 4481572 kB' 'Active(anon): 10103160 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623620 kB' 'Mapped: 222208 kB' 'Shmem: 9483012 kB' 'KReclaimable: 372660 kB' 'Slab: 1253064 kB' 'SReclaimable: 372660 kB' 'SUnreclaim: 880404 kB' 'KernelStack: 27408 kB' 'PageTables: 9380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 11544708 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237736 kB' 'VmallocChunk: 0 kB' 'Percpu: 129024 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4439412 kB' 'DirectMap2M: 50814976 kB' 'DirectMap1G: 80740352 kB' 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.999 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.000 19:19:16 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.000 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.000 19:19:16 
00:03:50.001 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:50.001 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:50.001 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:50.001 19:19:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:03:50.001 19:19:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
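At this point the test has read HugePages_Surp (0) and is about to read HugePages_Rsvd the same way: get_meminfo in setup/common.sh loads the meminfo file with mapfile and then walks it with IFS=': ' read until the requested key matches, which is exactly what the repeating @31/@32/@33 trace lines show. A minimal standalone sketch of that loop, reconstructed only from the commands visible in the trace (not the actual common.sh source, so treat names and details as assumptions):

#!/usr/bin/env bash
# Sketch of the get_meminfo pattern traced above; illustrative only, not the
# real setup/common.sh implementation.
shopt -s extglob

get_meminfo_sketch() {
	local get=$1 node=${2:-}
	local var val rest
	local mem_f=/proc/meminfo mem line
	# A per-node query (e.g. "get_meminfo HugePages_Surp 0") reads that
	# node's own meminfo file instead of the system-wide one.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem <"$mem_f"
	# Per-node files prefix every line with "Node N "; strip it so the
	# same key matching works for both file layouts.
	mem=("${mem[@]#Node +([0-9]) }")
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val rest <<<"$line"
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done
	return 1
}

get_meminfo_sketch HugePages_Total    # prints 1024 on this system
get_meminfo_sketch HugePages_Surp 0   # per-node value, here 0

The IFS=': ' split is what turns a line like 'HugePages_Rsvd:        0' into var=HugePages_Rsvd and val=0; every non-matching key simply falls through the continue, which is why the trace repeats the same four lines for each /proc/meminfo entry.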
00:03:50.001 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:50.001 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:50.001 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:50.001 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:50.001 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:50.001 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:50.001 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:50.001 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:50.001 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:50.001 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:50.001 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:50.001 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 104909980 kB' 'MemAvailable: 109393416 kB' 'Buffers: 4144 kB' 'Cached: 14600320 kB' 'SwapCached: 0 kB' 'Active: 10742400 kB' 'Inactive: 4481572 kB' 'Active(anon): 10102536 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622864 kB' 'Mapped: 222100 kB' 'Shmem: 9483028 kB' 'KReclaimable: 372660 kB' 'Slab: 1253064 kB' 'SReclaimable: 372660 kB' 'SUnreclaim: 880404 kB' 'KernelStack: 27392 kB' 'PageTables: 9316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 11544728 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237736 kB' 'VmallocChunk: 0 kB' 'Percpu: 129024 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4439412 kB' 'DirectMap2M: 50814976 kB' 'DirectMap1G: 80740352 kB'
00:03:50.004 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:50.004 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:50.004 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:50.004 19:19:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:03:50.004 19:19:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:50.004 nr_hugepages=1024
00:03:50.004 19:19:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:50.004 resv_hugepages=0
00:03:50.004 19:19:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:50.004 surplus_hugepages=0
00:03:50.004 19:19:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:50.004 anon_hugepages=0
00:03:50.004 19:19:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:50.004 19:19:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
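The hugepages.sh assertions just traced are pure bookkeeping: the run asked for 1024 hugepages, and the values read back from /proc/meminfo have to add up before the test moves on to HugePages_Total. A hedged sketch of that check, using the numbers this run echoed (variable names mirror the log, not necessarily the real setup/hugepages.sh):

#!/usr/bin/env bash
# Values echoed by the test at this point in the log.
nr_hugepages=1024    # requested system-wide 2048 kB hugepages
surp=0               # HugePages_Surp from /proc/meminfo
resv=0               # HugePages_Rsvd from /proc/meminfo
hp_total=1024        # HugePages_Total, fetched by the very next get_meminfo call

# The configured count must equal what the kernel reports once surplus and
# reserved pages are folded in; a mismatch would fail default_setup here.
(( hp_total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2
(( hp_total == nr_hugepages ))               || echo 'unexpected surplus/reserved pages' >&2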
00:03:50.004 19:19:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:50.004 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:50.004 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:50.004 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:50.004 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:50.004 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:50.004 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:50.004 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:50.004 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:50.004 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:50.004 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:50.005 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:50.005 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 104910420 kB' 'MemAvailable: 109393856 kB' 'Buffers: 4144 kB' 'Cached: 14600340 kB' 'SwapCached: 0 kB' 'Active: 10742384 kB' 'Inactive: 4481572 kB' 'Active(anon): 10102520 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622868 kB' 'Mapped: 222100 kB' 'Shmem: 9483048 kB' 'KReclaimable: 372660 kB' 'Slab: 1253064 kB' 'SReclaimable: 372660 kB' 'SUnreclaim: 880404 kB' 'KernelStack: 27392 kB' 'PageTables: 9316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 11544748 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237736 kB' 'VmallocChunk: 0 kB' 'Percpu: 129024 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4439412 kB' 'DirectMap2M: 50814976 kB' 'DirectMap1G: 80740352 kB'
00:03:50.007 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:50.007 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
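The 1024 returned here is consistent with the rest of the dump: 1024 pages at a Hugepagesize of 2048 kB is exactly the 2097152 kB reported as Hugetlb, i.e. 2 GiB of hugepage memory reserved and all of it still free. A quick check of that arithmetic:

#!/usr/bin/env bash
# Cross-check the hugepage lines from the /proc/meminfo dump above.
hugepages_total=1024
hugepagesize_kb=2048
hugetlb_kb=2097152

(( hugepages_total * hugepagesize_kb == hugetlb_kb )) && \
	echo "$hugepages_total x ${hugepagesize_kb}kB = ${hugetlb_kb}kB ($(( hugetlb_kb / 1024 / 1024 )) GiB)"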
00:03:50.007 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:50.007 19:19:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:50.007 19:19:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:50.007 19:19:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:50.007 19:19:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:50.007 19:19:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:50.007 19:19:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:50.007 19:19:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:50.007 19:19:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:50.007 19:19:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:50.008 19:19:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:50.008 19:19:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:50.008 19:19:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:50.008 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:50.008 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:03:50.008 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:50.008 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:50.008 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:50.008 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:50.008 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:50.008 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:50.008 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:50.008 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:50.008 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:50.008 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 57819752 kB' 'MemUsed: 7839256 kB' 'SwapCached: 0 kB' 'Active: 3977924 kB' 'Inactive: 156628 kB' 'Active(anon): 3813152 kB' 'Inactive(anon): 0 kB' 'Active(file): 164772 kB' 'Inactive(file): 156628 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3973820 kB' 'Mapped: 54652 kB' 'AnonPages: 163976 kB' 'Shmem: 3652420 kB' 'KernelStack: 12200 kB' 'PageTables: 3508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 167976 kB' 'Slab: 578928 kB' 'SReclaimable: 167976 kB' 'SUnreclaim: 410952 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.272 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.272 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.272 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.272 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.272 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.272 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.272 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.272 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:50.272 19:19:16 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:50.272 19:19:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:50.272 19:19:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:50.272 19:19:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:50.272 19:19:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:50.272 19:19:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:50.272 node0=1024 expecting 1024 00:03:50.273 19:19:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:50.273 00:03:50.273 real 0m4.682s 00:03:50.273 user 0m1.742s 00:03:50.273 sys 0m2.972s 00:03:50.273 19:19:16 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:50.273 19:19:16 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:50.273 ************************************ 00:03:50.273 END TEST default_setup 00:03:50.273 ************************************ 00:03:50.273 19:19:16 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:50.273 19:19:16 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:50.273 19:19:16 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:50.273 19:19:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:50.273 ************************************ 00:03:50.273 START TEST per_node_1G_alloc 00:03:50.273 ************************************ 00:03:50.273 19:19:16 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:03:50.273 19:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:50.273 19:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:50.273 19:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:50.273 19:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:50.273 19:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:50.273 19:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:50.273 19:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 
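The xtrace above is setup/common.sh's get_meminfo helper scanning /proc/meminfo one field at a time (IFS=': '; read -r var val _), skipping every key until it reaches the requested one (HugePages_Surp here, hence the final echo 0), after which default_setup confirms node0 holds the expected 1024 hugepages. A minimal standalone sketch of that lookup pattern, assuming a direct read of /proc/meminfo and an illustrative helper name rather than the exact SPDK code:

#!/usr/bin/env bash
# Hedged reconstruction of the lookup seen in the trace above.
# get_meminfo_field is an illustrative name; the real helper (get_meminfo)
# also accepts a node id and strips the "Node N " prefix when it reads the
# per-node meminfo file, which this sketch leaves out.
get_meminfo_field() {
    local get=$1
    local var val _
    while IFS=': ' read -r var val _; do
        # Skip every field until the requested key, as the xtrace shows.
        [[ $var == "$get" ]] || continue
        echo "$val"   # numeric value; sized fields are reported in kB
        return 0
    done < /proc/meminfo
    return 1
}

# Example: the HugePages_Surp lookup that ends the scan above returns 0.
surp=$(get_meminfo_field HugePages_Surp)
echo "HugePages_Surp=$surp"

Scanning linearly and bailing out on the first match keeps the helper free of external tools (no awk or grep), which is presumably why the trace shows one [[ ... ]]/continue pair per meminfo field.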
00:03:50.273 19:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:50.273 19:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:50.273 19:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:50.273 19:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:50.273 19:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:50.273 19:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:50.273 19:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:50.273 19:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:50.273 19:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:50.273 19:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:50.273 19:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:50.273 19:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:50.273 19:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:50.273 19:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:50.273 19:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:50.273 19:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:50.273 19:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:50.273 19:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:50.273 19:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.273 19:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:54.489 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:54.489 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:54.489 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:54.489 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:54.489 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:54.489 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:54.489 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:54.489 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:54.489 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:54.489 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:54.489 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:54.489 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:54.489 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:54.489 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:54.489 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:54.489 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:54.489 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 104954492 kB' 'MemAvailable: 109437928 kB' 'Buffers: 4144 kB' 'Cached: 14600480 kB' 'SwapCached: 0 kB' 'Active: 10745464 kB' 'Inactive: 4481572 kB' 'Active(anon): 10105600 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 624724 kB' 'Mapped: 221116 kB' 'Shmem: 9483188 kB' 'KReclaimable: 372660 kB' 'Slab: 1253356 kB' 'SReclaimable: 372660 kB' 'SUnreclaim: 880696 kB' 'KernelStack: 27536 kB' 'PageTables: 9340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 11539428 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237976 kB' 'VmallocChunk: 0 kB' 'Percpu: 129024 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4439412 kB' 'DirectMap2M: 50814976 kB' 'DirectMap1G: 
80740352 kB' 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.489 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.490 19:19:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.490 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 
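While verify_nr_hugepages is gathering AnonHugePages (anon=0) and HugePages_Surp at this point, the counts it will check were fixed earlier in the trace: get_test_nr_hugepages turned the requested 1048576 kB (1 GiB) into 1048576 / 2048 = 512 default-size pages and assigned that figure to each of nodes 0 and 1, which is where the NRHUGE=512 and HUGENODE=0,1 passed to scripts/setup.sh come from. A hedged sketch of that arithmetic, with illustrative variable names rather than the exact setup/hugepages.sh code:

#!/usr/bin/env bash
# Illustrative reconstruction of the per-node sizing visible in the trace;
# names and structure are assumptions, not the literal hugepages.sh code.
size_kb=1048576              # 1 GiB requested for the per-node test
default_hugepage_kb=2048     # Hugepagesize reported by /proc/meminfo
node_ids=(0 1)               # nodes passed to get_test_nr_hugepages

# 1048576 kB / 2048 kB per page = 512 hugepages for each listed node.
nr_hugepages=$(( size_kb / default_hugepage_kb ))

nodes_test=()
for node in "${node_ids[@]}"; do
    nodes_test[$node]=$nr_hugepages
done

# Mirrors the NRHUGE=512 HUGENODE=0,1 environment handed to scripts/setup.sh.
echo "NRHUGE=$nr_hugepages HUGENODE=$(IFS=,; echo "${node_ids[*]}")"
for node in "${node_ids[@]}"; do
    echo "node$node expecting ${nodes_test[$node]}"
done

Each listed node gets the full 512 pages rather than a share of them, consistent with the two nodes_test[_no_nodes]=512 assignments earlier in the trace: the 1 GiB request is interpreted per node.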
00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 104959408 kB' 'MemAvailable: 109442844 kB' 'Buffers: 4144 kB' 'Cached: 14600484 kB' 'SwapCached: 0 kB' 'Active: 10744628 kB' 'Inactive: 4481572 kB' 'Active(anon): 10104764 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 624880 kB' 'Mapped: 221008 kB' 'Shmem: 9483192 kB' 'KReclaimable: 372660 kB' 'Slab: 1253364 kB' 'SReclaimable: 372660 kB' 'SUnreclaim: 880704 kB' 'KernelStack: 27440 kB' 'PageTables: 8940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 11539696 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237880 kB' 'VmallocChunk: 0 kB' 'Percpu: 129024 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4439412 kB' 'DirectMap2M: 50814976 kB' 'DirectMap1G: 80740352 kB' 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.491 19:19:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.491 19:19:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.491 19:19:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.491 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.492 19:19:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.492 19:19:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.492 19:19:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:54.492 19:19:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.492 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 104959880 kB' 'MemAvailable: 109443316 kB' 'Buffers: 4144 kB' 'Cached: 14600500 kB' 'SwapCached: 0 kB' 'Active: 10744016 kB' 'Inactive: 4481572 kB' 'Active(anon): 10104152 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 624268 kB' 'Mapped: 221008 kB' 'Shmem: 9483208 kB' 'KReclaimable: 372660 kB' 'Slab: 1253364 kB' 'SReclaimable: 372660 kB' 'SUnreclaim: 880704 kB' 'KernelStack: 27248 kB' 'PageTables: 8476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 11538536 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237816 kB' 'VmallocChunk: 0 kB' 'Percpu: 129024 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4439412 kB' 'DirectMap2M: 50814976 kB' 'DirectMap1G: 80740352 kB' 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.493 19:19:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.493 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:54.494 nr_hugepages=1024 00:03:54.494 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:54.494 resv_hugepages=0 00:03:54.494 19:19:20 
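At this point both scans have come back empty (surp=0 above, resv=0 here) and the test echoes its summary -- nr_hugepages=1024 and resv_hugepages=0 on this line, surplus_hugepages=0 and anon_hugepages=0 on the next -- before re-reading HugePages_Total and checking that the numbers add up. The sketch below shows that kind of bookkeeping in isolation; it assumes the echoed names mean what they say and uses the kernel invariant that HugePages_Total equals the persistently requested pages plus any surplus, which is a simplification of whatever exact expression setup/hugepages.sh evaluates.

#!/usr/bin/env bash
# Sketch only: re-derive the counters the test echoes and check that the
# hugepage pool adds up. Assumes the default 2 MiB page size controlled by
# /proc/sys/vm/nr_hugepages; not the project's hugepages.sh.
read_counter() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

total=$(read_counter HugePages_Total)
free=$(read_counter HugePages_Free)
resv=$(read_counter HugePages_Rsvd)
surp=$(read_counter HugePages_Surp)
requested=$(cat /proc/sys/vm/nr_hugepages)

echo "nr_hugepages=$requested resv_hugepages=$resv surplus_hugepages=$surp free=$free"

# Persistent pages plus kernel-created surplus should account for the whole pool.
if (( total == requested + surp )); then
	echo "hugepage accounting is consistent ($total pages)"
else
	echo "unexpected pool size: $total != $((requested + surp))" >&2
	exit 1
fi
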
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:54.494 surplus_hugepages=0 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:54.495 anon_hugepages=0 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 104958924 kB' 'MemAvailable: 109442360 kB' 'Buffers: 4144 kB' 'Cached: 14600524 kB' 'SwapCached: 0 kB' 'Active: 10747288 kB' 'Inactive: 4481572 kB' 'Active(anon): 10107424 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 627568 kB' 'Mapped: 221512 kB' 'Shmem: 9483232 kB' 'KReclaimable: 372660 kB' 'Slab: 1253052 kB' 'SReclaimable: 372660 kB' 'SUnreclaim: 880392 kB' 'KernelStack: 27408 kB' 'PageTables: 9140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 11543736 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237896 kB' 'VmallocChunk: 0 kB' 'Percpu: 129024 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4439412 kB' 'DirectMap2M: 50814976 kB' 'DirectMap1G: 80740352 kB' 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.495 19:19:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.495 19:19:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.495 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:54.496 19:19:20 
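With the values shown the check is satisfied (1024 == 1024 + 0 + 0), and get_nodes then starts recording the per-node split: 512 pages for the first node here and, as the lines that follow show, 512 for the second as well (no_nodes=2), after which the same field-by-field scan is repeated against /sys/devices/system/node/node0/meminfo. The short sketch below tallies that per-node distribution directly from sysfs; it is illustrative only, not SPDK's get_nodes(), and assumes the standard kernel layout of per-node meminfo files.

#!/usr/bin/env bash
# Sketch only: sum HugePages_Total across NUMA nodes and compare with the
# system-wide pool, mirroring the 512 + 512 = 1024 split this run reports.
total_nodes=0
for node_dir in /sys/devices/system/node/node[0-9]*; do
	node=${node_dir##*node}
	# Per-node lines look like "Node 0 HugePages_Total:   512" -> last field.
	pages=$(awk '/HugePages_Total/ {print $NF}' "$node_dir/meminfo")
	echo "node$node: $pages hugepages"
	(( total_nodes += pages ))
done

system_total=$(awk '/^HugePages_Total/ {print $2}' /proc/meminfo)
echo "sum over nodes: $total_nodes, system-wide: $system_total"
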
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.496 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58902980 kB' 'MemUsed: 6756028 kB' 'SwapCached: 0 kB' 'Active: 3983756 kB' 'Inactive: 156628 kB' 'Active(anon): 3818984 kB' 'Inactive(anon): 0 kB' 'Active(file): 164772 kB' 'Inactive(file): 156628 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3973928 kB' 'Mapped: 54652 kB' 'AnonPages: 169708 kB' 'Shmem: 3652528 kB' 'KernelStack: 12184 kB' 'PageTables: 3316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 167976 kB' 'Slab: 578784 kB' 'SReclaimable: 167976 kB' 'SUnreclaim: 410808 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.497 19:19:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.497 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679868 kB' 'MemFree: 46053808 kB' 'MemUsed: 14626060 kB' 'SwapCached: 0 kB' 'Active: 6766064 kB' 'Inactive: 4324944 kB' 'Active(anon): 6290972 kB' 'Inactive(anon): 0 kB' 'Active(file): 475092 kB' 'Inactive(file): 4324944 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10630764 kB' 'Mapped: 166860 kB' 'AnonPages: 460432 kB' 'Shmem: 5830728 kB' 'KernelStack: 15336 kB' 'PageTables: 5820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 204684 kB' 'Slab: 674268 kB' 'SReclaimable: 204684 kB' 'SUnreclaim: 469584 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
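The block above is setup/common.sh's get_meminfo helper answering a per-node query: with node=1 it points mem_f at /sys/devices/system/node/node1/meminfo instead of /proc/meminfo, uses mapfile plus the extglob expansion "${mem[@]#Node +([0-9]) }" to strip the "Node 1 " prefix, and then walks the dump field by field (the long run of "continue" entries) until it reaches HugePages_Surp, which it echoes (0 here). A minimal standalone sketch of that parsing pattern, assuming the same sysfs layout; the function name get_node_meminfo is illustrative and not part of the script:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the "Node +([0-9]) " prefix pattern

    # Illustrative re-creation of the get_meminfo pattern traced above.
    get_node_meminfo() {
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo
        # Per-node counters live in sysfs and carry a "Node <n> " prefix.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # skip every field except the requested one
            echo "${val:-0}"
            return 0
        done
        echo 0
    }

    get_node_meminfo HugePages_Surp 1   # the dump above yields 0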
00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.498 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.499 19:19:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.499 19:19:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:54.499 node0=512 expecting 512 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:54.499 node1=512 expecting 512 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:54.499 00:03:54.499 real 0m4.325s 00:03:54.499 user 0m1.696s 00:03:54.499 sys 0m2.674s 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:54.499 19:19:20 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:54.499 ************************************ 00:03:54.499 END TEST per_node_1G_alloc 00:03:54.499 ************************************ 00:03:54.499 19:19:20 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:54.499 19:19:20 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:54.499 19:19:20 
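With HugePages_Surp reported as 0 on both nodes, the nodes_test bookkeeping above reduces to the expected 512-page split, and the test closes with "node0=512 expecting 512" / "node1=512 expecting 512" before run_test moves on to even_2G_alloc. A rough equivalent of that final per-node check, using the standard 2048 kB hugepage counters in sysfs rather than the script's own meminfo bookkeeping (the expected value of 512 is taken from the trace):

    expected=512
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=$(basename "$node_dir")
        count=$(cat "$node_dir"/hugepages/hugepages-2048kB/nr_hugepages)
        if [[ $count -eq $expected ]]; then
            echo "$node=$count expecting $expected"
        else
            echo "$node has $count hugepages, expected $expected" >&2
            exit 1
        fi
    done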
setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:54.499 19:19:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:54.761 ************************************ 00:03:54.761 START TEST even_2G_alloc 00:03:54.761 ************************************ 00:03:54.761 19:19:20 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:03:54.761 19:19:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:54.761 19:19:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:54.761 19:19:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:54.761 19:19:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:54.761 19:19:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:54.761 19:19:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:54.761 19:19:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:54.761 19:19:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:54.761 19:19:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:54.761 19:19:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:54.761 19:19:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:54.761 19:19:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:54.761 19:19:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:54.761 19:19:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:54.761 19:19:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:54.761 19:19:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:54.761 19:19:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:54.761 19:19:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:54.761 19:19:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:54.761 19:19:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:54.761 19:19:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:54.761 19:19:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:54.761 19:19:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:54.761 19:19:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:54.761 19:19:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:54.761 19:19:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:54.761 19:19:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.761 19:19:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:58.971 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:58.971 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:58.971 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
00:03:58.971 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:58.971 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:58.971 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:58.971 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:58.971 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:58.971 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:58.971 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:58.971 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:58.971 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:58.971 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:58.971 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:58.971 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:58.971 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:58.971 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:58.971 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:58.971 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:58.971 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:58.971 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:58.971 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:58.971 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:58.971 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:58.971 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:58.971 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:58.971 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:58.971 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:58.971 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:58.971 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.971 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.971 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.971 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.971 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.971 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.971 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.971 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 104958272 kB' 'MemAvailable: 109441708 kB' 'Buffers: 4144 kB' 'Cached: 14600656 kB' 'SwapCached: 0 kB' 'Active: 10742592 kB' 'Inactive: 4481572 kB' 'Active(anon): 10102728 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622616 kB' 'Mapped: 221500 kB' 'Shmem: 9483364 kB' 'KReclaimable: 372660 kB' 'Slab: 1252148 kB' 'SReclaimable: 372660 kB' 'SUnreclaim: 879488 kB' 'KernelStack: 27328 kB' 'PageTables: 9004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 11539132 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237640 kB' 'VmallocChunk: 0 kB' 'Percpu: 129024 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4439412 kB' 'DirectMap2M: 50814976 kB' 'DirectMap1G: 80740352 kB' 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.972 19:19:24 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.972 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.973 19:19:24 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 
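The anon=0 result above follows from the transparent-hugepage guard near the top of this block: because /sys/kernel/mm/transparent_hugepage/enabled reads "always [madvise] never" rather than containing "[never]", verify_nr_hugepages samples AnonHugePages from /proc/meminfo, and the dump reports 0 kB. A compact sketch of the same decision, assuming the standard THP sysfs knob (variable names are illustrative):

    anon=0
    thp=/sys/kernel/mm/transparent_hugepage/enabled
    if [[ -r $thp && $(<"$thp") != *"[never]"* ]]; then
        # AnonHugePages is reported in kB; the dump above shows 0 kB.
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    fi
    echo "anon=${anon:-0}"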
00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 104958776 kB' 'MemAvailable: 109442212 kB' 'Buffers: 4144 kB' 'Cached: 14600672 kB' 'SwapCached: 0 kB' 'Active: 10743988 kB' 'Inactive: 4481572 kB' 'Active(anon): 10104124 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 624000 kB' 'Mapped: 221500 kB' 'Shmem: 9483380 kB' 'KReclaimable: 372660 kB' 'Slab: 1252148 kB' 'SReclaimable: 372660 kB' 'SUnreclaim: 879488 kB' 'KernelStack: 27360 kB' 'PageTables: 9180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 11539944 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237640 kB' 'VmallocChunk: 0 kB' 'Percpu: 129024 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4439412 kB' 'DirectMap2M: 50814976 kB' 'DirectMap1G: 80740352 kB' 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.973 19:19:24 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.973 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.974 19:19:24 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.974 19:19:24 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.974 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 104951852 kB' 'MemAvailable: 109435288 kB' 'Buffers: 4144 kB' 'Cached: 14600692 kB' 'SwapCached: 0 kB' 'Active: 10747624 kB' 'Inactive: 4481572 kB' 'Active(anon): 10107760 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 628152 kB' 'Mapped: 221496 kB' 'Shmem: 9483400 kB' 'KReclaimable: 372660 kB' 'Slab: 1252164 kB' 'SReclaimable: 372660 kB' 'SUnreclaim: 879504 kB' 'KernelStack: 27360 kB' 'PageTables: 9156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 11544200 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237628 kB' 'VmallocChunk: 0 kB' 'Percpu: 129024 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4439412 kB' 'DirectMap2M: 50814976 kB' 'DirectMap1G: 80740352 kB' 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.975 
19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.975 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.976 19:19:24 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.976 19:19:24 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.976 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:58.977 nr_hugepages=1024 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:58.977 resv_hugepages=0 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:58.977 surplus_hugepages=0 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:58.977 anon_hugepages=0 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
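The xtrace above shows setup/common.sh's get_meminfo walking every /proc/meminfo field until it reaches the requested key: HugePages_Surp and HugePages_Rsvd both come back 0 here, after which hugepages.sh verifies (( 1024 == nr_hugepages + surp + resv )) and echoes nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0. The following is a minimal Bash sketch of that lookup pattern as it appears in the trace; the helper name get_meminfo_sketch and the surrounding details are simplified assumptions for illustration, not the SPDK setup/common.sh source.

    # Sketch (assumption, not the SPDK implementation): look up one field of
    # /proc/meminfo, or of a per-NUMA-node meminfo file when a node is given,
    # mirroring the mapfile / "Node N " prefix strip / IFS=': ' read pattern
    # visible in the trace. Requires Linux and bash with extglob.
    shopt -s extglob

    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local -a mem
        local line var val _

        # Per-node meminfo, if a node number was supplied and the file exists.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo

        mapfile -t mem < "$mem_f"
        # Per-node files prefix each line with "Node N "; drop that prefix.
        mem=("${mem[@]#Node +([0-9]) }")

        for line in "${mem[@]}"; do
            # Split "Key: value kB" into key and value; the unit is discarded.
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

    # Example matching the values reported in this run (hypothetical usage):
    surp=$(get_meminfo_sketch HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0 in this run
    nr=$(get_meminfo_sketch HugePages_Total)     # 1024 in this run
    (( 1024 == nr + surp + resv )) && echo "hugepage accounting consistent"

With HugePages_Total at 1024 and both the surplus and reserved counts at 0, the arithmetic check in the last line succeeds, which is the condition the even_2G_alloc test is asserting before it moves on to the per-node HugePages_Total lookup traced below.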
00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 104952116 kB' 'MemAvailable: 109435552 kB' 'Buffers: 4144 kB' 'Cached: 14600716 kB' 'SwapCached: 0 kB' 'Active: 10748112 kB' 'Inactive: 4481572 kB' 'Active(anon): 10108248 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 628148 kB' 'Mapped: 221804 kB' 'Shmem: 9483424 kB' 'KReclaimable: 372660 kB' 'Slab: 1252164 kB' 'SReclaimable: 372660 kB' 'SUnreclaim: 879504 kB' 'KernelStack: 27360 kB' 'PageTables: 9180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 11544224 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237628 kB' 'VmallocChunk: 0 kB' 'Percpu: 129024 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4439412 kB' 'DirectMap2M: 50814976 kB' 'DirectMap1G: 80740352 kB' 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.977 19:19:24 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.977 
19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.977 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.978 19:19:24 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.978 
19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:58.978 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58882188 kB' 'MemUsed: 6776820 kB' 'SwapCached: 0 kB' 'Active: 3981432 kB' 'Inactive: 156628 kB' 'Active(anon): 3816660 kB' 'Inactive(anon): 0 kB' 'Active(file): 164772 kB' 'Inactive(file): 156628 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3974060 kB' 'Mapped: 54872 kB' 'AnonPages: 167164 kB' 'Shmem: 3652660 kB' 'KernelStack: 12168 kB' 'PageTables: 3296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 
167976 kB' 'Slab: 578112 kB' 'SReclaimable: 167976 kB' 'SUnreclaim: 410136 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.979 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.980 19:19:24 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.980 19:19:24 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679868 kB' 'MemFree: 46070180 kB' 'MemUsed: 14609688 kB' 'SwapCached: 0 kB' 'Active: 6766720 kB' 'Inactive: 4324944 kB' 'Active(anon): 6291628 kB' 'Inactive(anon): 0 kB' 'Active(file): 475092 kB' 'Inactive(file): 4324944 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10630816 kB' 'Mapped: 166996 kB' 'AnonPages: 460952 kB' 'Shmem: 5830780 kB' 'KernelStack: 15176 kB' 
'PageTables: 5876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 204684 kB' 'Slab: 674052 kB' 'SReclaimable: 204684 kB' 'SUnreclaim: 469368 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:58.980 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.981 19:19:24 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.981 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.982 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.982 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.982 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.982 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.982 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.982 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.982 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.982 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.982 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.982 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.982 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.982 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.982 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.982 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.982 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.982 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.982 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.982 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.982 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.982 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.982 19:19:24 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.982 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.982 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.982 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.982 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.982 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.982 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.982 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.982 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:58.982 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.982 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.982 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.982 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.982 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:58.982 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:58.982 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:58.982 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:58.982 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:58.982 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:58.982 node0=512 expecting 512 00:03:58.982 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:58.982 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:58.982 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:58.982 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:58.982 node1=512 expecting 512 00:03:58.982 19:19:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:58.982 00:03:58.982 real 0m4.276s 00:03:58.982 user 0m1.553s 00:03:58.982 sys 0m2.765s 00:03:58.982 19:19:24 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:58.982 19:19:24 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:58.982 ************************************ 00:03:58.982 END TEST even_2G_alloc 00:03:58.982 ************************************ 00:03:58.982 19:19:24 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:58.982 19:19:24 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:58.982 19:19:24 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:58.982 19:19:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:58.982 ************************************ 00:03:58.982 START TEST odd_alloc 00:03:58.982 
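The even_2G_alloc trace above (and the odd_alloc trace that follows) is dominated by setup/common.sh's get_meminfo helper scanning /proc/meminfo, or the per-node /sys/devices/system/node/nodeN/meminfo, field by field with IFS=': ' until it reaches the requested key. A minimal stand-alone sketch of that parsing pattern is below; the function name and argument handling are illustrative, not the exact SPDK helper.

shopt -s extglob

# get_meminfo_value <field> [node] -- hypothetical helper mirroring the pattern in the trace above.
get_meminfo_value() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo var val _
    local -a mem
    # Per-node queries read the node-specific file instead of the system-wide one.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix; strip it
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# e.g. get_meminfo_value HugePages_Surp 0   # -> 0 in the node0 dump above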
************************************ 00:03:58.982 19:19:25 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:03:58.982 19:19:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:58.982 19:19:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:58.982 19:19:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:58.982 19:19:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:58.982 19:19:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:58.982 19:19:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:58.982 19:19:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:58.982 19:19:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:58.982 19:19:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:58.982 19:19:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:58.982 19:19:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:58.982 19:19:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:58.982 19:19:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:58.982 19:19:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:58.982 19:19:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:58.982 19:19:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:58.982 19:19:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:58.982 19:19:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:58.982 19:19:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:58.982 19:19:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:58.982 19:19:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:58.982 19:19:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:58.982 19:19:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:58.982 19:19:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:58.982 19:19:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:58.982 19:19:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:58.982 19:19:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.982 19:19:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:03.197 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:03.197 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:03.197 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:03.197 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:03.197 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:03.197 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:03.197 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:03.197 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:03.197 
0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:03.197 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:03.197 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:03.197 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:03.197 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:03.197 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:03.197 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:03.197 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:03.197 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:03.197 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 104980844 kB' 'MemAvailable: 109464280 kB' 'Buffers: 4144 kB' 'Cached: 14600856 kB' 'SwapCached: 0 kB' 'Active: 10745576 kB' 'Inactive: 4481572 kB' 'Active(anon): 10105712 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 624948 kB' 'Mapped: 221092 kB' 'Shmem: 9483564 kB' 'KReclaimable: 372660 kB' 'Slab: 1252472 kB' 'SReclaimable: 372660 kB' 'SUnreclaim: 879812 kB' 'KernelStack: 27536 kB' 'PageTables: 9328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508440 kB' 'Committed_AS: 11541936 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237880 kB' 
'VmallocChunk: 0 kB' 'Percpu: 129024 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4439412 kB' 'DirectMap2M: 50814976 kB' 'DirectMap1G: 80740352 kB' 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.198 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.199 19:19:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.199 
19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 104981752 kB' 'MemAvailable: 109465188 kB' 'Buffers: 4144 kB' 'Cached: 14600860 kB' 'SwapCached: 0 kB' 'Active: 10744616 kB' 'Inactive: 4481572 kB' 'Active(anon): 10104752 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 624524 kB' 'Mapped: 221044 kB' 'Shmem: 9483568 kB' 'KReclaimable: 372660 kB' 'Slab: 1252480 kB' 'SReclaimable: 372660 kB' 'SUnreclaim: 879820 kB' 'KernelStack: 27552 kB' 'PageTables: 9748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508440 kB' 'Committed_AS: 11541952 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237944 kB' 'VmallocChunk: 0 kB' 'Percpu: 129024 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4439412 kB' 'DirectMap2M: 50814976 kB' 'DirectMap1G: 80740352 kB' 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.199 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.200 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.201 19:19:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 104983400 kB' 'MemAvailable: 109466836 kB' 'Buffers: 4144 kB' 'Cached: 14600876 kB' 'SwapCached: 0 kB' 'Active: 10744644 kB' 'Inactive: 4481572 kB' 'Active(anon): 10104780 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 624504 kB' 'Mapped: 221044 kB' 'Shmem: 9483584 kB' 'KReclaimable: 372660 kB' 'Slab: 1252480 kB' 'SReclaimable: 372660 kB' 'SUnreclaim: 879820 kB' 'KernelStack: 27376 kB' 'PageTables: 9156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508440 kB' 'Committed_AS: 11541976 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237960 kB' 'VmallocChunk: 0 kB' 'Percpu: 129024 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4439412 kB' 'DirectMap2M: 50814976 kB' 'DirectMap1G: 80740352 kB' 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.201 19:19:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.201 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.202 
19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.202 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:03.203 nr_hugepages=1025 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:03.203 resv_hugepages=0 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:03.203 surplus_hugepages=0 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:03.203 anon_hugepages=0 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 104983640 kB' 'MemAvailable: 109467076 kB' 'Buffers: 4144 kB' 'Cached: 14600896 kB' 'SwapCached: 0 kB' 'Active: 10744904 kB' 'Inactive: 4481572 kB' 'Active(anon): 10105040 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 624784 kB' 'Mapped: 221044 kB' 'Shmem: 9483604 kB' 'KReclaimable: 372660 kB' 'Slab: 1252512 kB' 'SReclaimable: 372660 kB' 'SUnreclaim: 879852 kB' 'KernelStack: 27536 kB' 'PageTables: 9264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508440 kB' 'Committed_AS: 11541996 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238008 kB' 'VmallocChunk: 0 kB' 'Percpu: 129024 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4439412 kB' 'DirectMap2M: 50814976 kB' 'DirectMap1G: 80740352 kB' 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.203 19:19:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.203 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
[trace condensed: setup/common.sh@31-@32 repeat the same IFS/read/continue scan for every remaining /proc/meminfo key (Cached through CmaFree) while looking for HugePages_Total]
00:04:03.205 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.205 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.205 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.205 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val
_ 00:04:03.205 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.205 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:03.205 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:03.205 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:03.205 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:03.205 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:03.205 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.205 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:03.205 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.205 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:03.205 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:03.205 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:03.205 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:03.205 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.205 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:03.205 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.205 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:03.205 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:03.205 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.205 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.205 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:03.205 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:03.205 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.205 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.205 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.205 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.205 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58899128 kB' 'MemUsed: 6759880 kB' 'SwapCached: 0 kB' 'Active: 3976424 kB' 'Inactive: 156628 kB' 'Active(anon): 3811652 kB' 'Inactive(anon): 0 kB' 'Active(file): 164772 kB' 'Inactive(file): 156628 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3974168 kB' 'Mapped: 54148 kB' 'AnonPages: 162128 kB' 'Shmem: 3652768 kB' 'KernelStack: 12184 kB' 'PageTables: 3320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 167976 kB' 'Slab: 578176 kB' 'SReclaimable: 167976 kB' 'SUnreclaim: 410200 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 
0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:03.205 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.205 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[trace condensed: the same per-key scan repeats over the remaining node0 meminfo fields (MemFree through HugePages_Free) while looking for HugePages_Surp]
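For readers following the condensed scans above: what setup/common.sh is doing at @17-@33 is a lookup of a single key in /proc/meminfo or in a per-node meminfo file. The following is a minimal sketch reconstructed from the line references visible in this trace; it is deliberately simplified (a plain for/read loop instead of the script's printf/mapfile form) and is an illustration, not the canonical SPDK source.

shopt -s extglob
# get_meminfo KEY [NODE] -- print the value of KEY, optionally from NODE's own meminfo.
get_meminfo() {
  local get=$1 node=$2 var val _ line
  local mem_f=/proc/meminfo
  # A per-node query reads that node's meminfo; its lines carry a "Node N " prefix.
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  local -a mem
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix, as common.sh@29 does
  local IFS=': '
  for line in "${mem[@]}"; do
    read -r var val _ <<< "$line"    # e.g. var=HugePages_Total, val=1025
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done
  return 1
}
# On the machine in this log, 'get_meminfo HugePages_Total' would print 1025 and
# 'get_meminfo HugePages_Surp 0' would print 0.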
00:04:03.206 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.206 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.206 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.206 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.206 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:03.206 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:03.206 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:03.206 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.206 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:03.206 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.206 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:03.206 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:03.206 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.206 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.206 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:03.206 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:03.206 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.206 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.206 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.206 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.206 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679868 kB' 'MemFree: 46086028 kB' 'MemUsed: 14593840 kB' 'SwapCached: 0 kB' 'Active: 6768844 kB' 'Inactive: 4324944 kB' 'Active(anon): 6293752 kB' 'Inactive(anon): 0 kB' 'Active(file): 475092 kB' 'Inactive(file): 4324944 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10630872 kB' 'Mapped: 166896 kB' 'AnonPages: 463024 kB' 'Shmem: 5830836 kB' 'KernelStack: 15208 kB' 'PageTables: 5784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 204684 kB' 'Slab: 674336 kB' 'SReclaimable: 204684 kB' 'SUnreclaim: 469652 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:03.206 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.206 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.206 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.206 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.206 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.206 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
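The hugepages.sh@115-@117 records in the trace above are the per-node accounting loop for odd_alloc. A minimal sketch of that step, assuming the get_meminfo helper sketched earlier; the initial nodes_test values below are illustrative stand-ins for what get_test_nr_hugepages_per_node computed before this excerpt.

nodes_test=([0]=513 [1]=512)   # illustrative; populated earlier by get_test_nr_hugepages_per_node
resv=0                         # reserved pages; 0 in this run
for node in "${!nodes_test[@]}"; do
  (( nodes_test[node] += resv ))                                     # hugepages.sh@116
  (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))    # hugepages.sh@117; 0 on both nodes here
done
# The test then compares only the sorted sets of expected and actual counts
# (the "[[ 512 513 == \5\1\2\ \5\1\3 ]]" check further down), which is why a line
# such as "node0=512 expecting 513" still counts as a pass.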
[trace condensed: setup/common.sh repeats the same per-key scan over the remaining node1 meminfo fields (MemUsed through HugePages_Free) while looking for HugePages_Surp]
00:04:03.208 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.208 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.208 19:19:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:03.208 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:03.208 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node
in "${!nodes_test[@]}" 00:04:03.208 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:03.208 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:03.208 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:03.208 node0=512 expecting 513 00:04:03.208 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:03.208 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:03.208 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:03.208 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:03.208 node1=513 expecting 512 00:04:03.208 19:19:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:03.208 00:04:03.208 real 0m4.171s 00:04:03.208 user 0m1.557s 00:04:03.208 sys 0m2.671s 00:04:03.208 19:19:29 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:03.208 19:19:29 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:03.208 ************************************ 00:04:03.208 END TEST odd_alloc 00:04:03.208 ************************************ 00:04:03.208 19:19:29 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:03.208 19:19:29 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:03.208 19:19:29 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:03.208 19:19:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:03.208 ************************************ 00:04:03.208 START TEST custom_alloc 00:04:03.208 ************************************ 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:03.208 19:19:29 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in 
"${!nodes_hp[@]}" 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.208 19:19:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:07.437 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:07.437 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:07.437 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:07.437 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:07.437 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:07.437 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:07.437 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:07.437 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:07.437 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:07.437 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:07.437 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:07.437 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:07.437 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 
00:04:07.437 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:07.437 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:07.437 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:07.437 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:07.437 19:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:07.437 19:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:07.437 19:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:07.437 19:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:07.437 19:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:07.437 19:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:07.437 19:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:07.437 19:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:07.437 19:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:07.437 19:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:07.437 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:07.437 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:07.437 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:07.437 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.437 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.437 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.437 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.437 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.437 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.437 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.437 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.437 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 103924560 kB' 'MemAvailable: 108407996 kB' 'Buffers: 4144 kB' 'Cached: 14601044 kB' 'SwapCached: 0 kB' 'Active: 10745936 kB' 'Inactive: 4481572 kB' 'Active(anon): 10106072 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 625744 kB' 'Mapped: 221028 kB' 'Shmem: 9483752 kB' 'KReclaimable: 372660 kB' 'Slab: 1252648 kB' 'SReclaimable: 372660 kB' 'SUnreclaim: 879988 kB' 'KernelStack: 27504 kB' 'PageTables: 9680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985176 kB' 'Committed_AS: 11540084 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237800 kB' 'VmallocChunk: 0 kB' 'Percpu: 129024 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 
kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4439412 kB' 'DirectMap2M: 50814976 kB' 'DirectMap1G: 80740352 kB' 00:04:07.437 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.437 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.437 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.437 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.437 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.437 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.437 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.437 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.437 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.437 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.437 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.437 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.437 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.437 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.437 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.437 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.437 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.437 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.437 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.437 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.437 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.437 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.438 19:19:33 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.438 19:19:33 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.438 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 
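The long run of "[[ key == ... ]] / continue" entries above and below is setup/common.sh's get_meminfo walking a meminfo snapshot key by key: each line is read with IFS=': ' into var/val, every key other than the requested one is skipped, and the value of the first match is echoed (AnonHugePages in the pass that just ended with anon=0, HugePages_Surp in the pass that starts here). A minimal sketch of that pattern, assuming only the global /proc/meminfo is wanted (the real helper also handles per-node files):

#!/usr/bin/env bash
# Minimal sketch of the scan driving the trace: read /proc/meminfo with
# IFS=': ', skip every key that is not the requested one, and print the value
# of the first match. Mirrors setup/common.sh's get_meminfo only in spirit.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue  # MemTotal, Buffers, ... are skipped
        echo "$val"                       # a kB figure or a bare page count
        return 0
    done < /proc/meminfo
    echo 0                                # key absent: assume 0 for this sketch
}

get_meminfo_sketch AnonHugePages   # 0 kB on this machine, hence anon=0 above
get_meminfo_sketch HugePages_Surp  # the pass that continues in the trace below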
00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 103924904 kB' 'MemAvailable: 108408340 kB' 'Buffers: 4144 kB' 'Cached: 14601044 kB' 'SwapCached: 0 kB' 'Active: 10746344 kB' 'Inactive: 4481572 kB' 'Active(anon): 10106480 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 626164 kB' 'Mapped: 220964 kB' 'Shmem: 9483752 kB' 'KReclaimable: 372660 kB' 'Slab: 1252616 kB' 'SReclaimable: 372660 kB' 'SUnreclaim: 879956 kB' 'KernelStack: 27472 kB' 'PageTables: 9560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985176 kB' 'Committed_AS: 11540100 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237800 kB' 'VmallocChunk: 0 kB' 'Percpu: 129024 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4439412 kB' 'DirectMap2M: 50814976 kB' 'DirectMap1G: 80740352 kB' 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.439 19:19:33 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.439 
19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.439 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
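The snapshots printed above consistently report 'HugePages_Total: 1536', 'Hugepagesize: 2048 kB' and 'Hugetlb: 3145728 kB', i.e. the 512 + 1024 pages requested per node are all allocated and account for 3 GiB of hugetlb memory. A one-line cross-check of that arithmetic, with the values copied from the log:

#!/usr/bin/env bash
# Cross-check of the meminfo snapshot in the trace: 1536 hugepages of 2048 kB
# should account for the reported 'Hugetlb: 3145728 kB' (3 GiB).
hugepages_total=1536      # HugePages_Total from the snapshot
hugepagesize_kb=2048      # Hugepagesize from the snapshot
echo "expected Hugetlb: $((hugepages_total * hugepagesize_kb)) kB"  # 3145728 kB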
00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
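common.sh@23 and @29 in the trace hint at the per-node branch of the same helper: when a node is given, the snapshot comes from that node's meminfo file under /sys/devices/system/node and the leading "Node N " prefix is stripped so the lines parse like /proc/meminfo. A sketch of that branch, assuming node 0 and the extglob prefix pattern shown in the log; this is not the real setup/common.sh.

#!/usr/bin/env bash
# Sketch of the per-node branch hinted at by common.sh@23/@29: read the node's
# meminfo and strip the leading "Node N " prefix before parsing. Node 0 is an
# arbitrary example.
shopt -s extglob                           # needed for the +([0-9]) pattern below

node=0
mem_f=/sys/devices/system/node/node${node}/meminfo
[[ -e $mem_f ]] || mem_f=/proc/meminfo     # fall back to the global file

mapfile -t mem < "$mem_f"
mem=("${mem[@]#Node +([0-9]) }")           # "Node 0 MemTotal: ..." -> "MemTotal: ..."

for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"
    [[ $var == HugePages_Total ]] && echo "node$node HugePages_Total: $val"
done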
00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.440 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 103925244 kB' 'MemAvailable: 108408680 kB' 'Buffers: 4144 kB' 'Cached: 14601044 kB' 'SwapCached: 0 kB' 'Active: 10745588 kB' 'Inactive: 4481572 kB' 'Active(anon): 10105724 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 625320 kB' 'Mapped: 221020 kB' 'Shmem: 
9483752 kB' 'KReclaimable: 372660 kB' 'Slab: 1252684 kB' 'SReclaimable: 372660 kB' 'SUnreclaim: 880024 kB' 'KernelStack: 27376 kB' 'PageTables: 9200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985176 kB' 'Committed_AS: 11540124 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237800 kB' 'VmallocChunk: 0 kB' 'Percpu: 129024 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4439412 kB' 'DirectMap2M: 50814976 kB' 'DirectMap1G: 80740352 kB' 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.441 
19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.441 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
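The long run of near-identical trace entries around this point is `get_meminfo` from setup/common.sh scanning a meminfo snapshot key by key for HugePages_Rsvd: every field that does not match falls through to `continue`, which is why the same three commands (continue, IFS=': ', read -r var val _) repeat once per field until the requested key is reached and its value is echoed. A minimal sketch reconstructed from this trace (variable names follow the trace; this is not the verbatim setup/common.sh source):

    shopt -s extglob                      # needed for the +([0-9]) pattern below
    get_meminfo() {
      local get=$1 node=$2
      local var val _ line
      local mem_f=/proc/meminfo mem
      # With a node argument, prefer the per-node meminfo file when it exists;
      # with node empty, node$node/meminfo does not exist and /proc/meminfo is used.
      [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")    # per-node lines carry a "Node N " prefix
      for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue  # the repeated "continue" entries in the trace
        echo "$val"                       # bare number; the "kB" unit lands in $_
        return 0
      done
      return 1
    }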
00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.442 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:07.443 nr_hugepages=1536 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:07.443 resv_hugepages=0 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:07.443 surplus_hugepages=0 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:07.443 anon_hugepages=0 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 103925244 kB' 'MemAvailable: 108408680 kB' 'Buffers: 4144 kB' 'Cached: 14601084 kB' 'SwapCached: 0 kB' 'Active: 10744972 kB' 'Inactive: 4481572 kB' 'Active(anon): 10105108 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 624632 kB' 'Mapped: 221020 kB' 'Shmem: 9483792 kB' 'KReclaimable: 372660 kB' 'Slab: 1252684 kB' 'SReclaimable: 372660 kB' 'SUnreclaim: 880024 kB' 'KernelStack: 27360 kB' 'PageTables: 9144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985176 kB' 'Committed_AS: 11540144 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237800 kB' 'VmallocChunk: 0 kB' 'Percpu: 129024 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4439412 kB' 'DirectMap2M: 50814976 kB' 'DirectMap1G: 80740352 kB' 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.443 19:19:33 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.443 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # continue 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:07.444 19:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58888756 kB' 'MemUsed: 6770252 kB' 'SwapCached: 0 kB' 'Active: 3976412 kB' 'Inactive: 156628 kB' 'Active(anon): 3811640 kB' 'Inactive(anon): 0 kB' 'Active(file): 164772 kB' 'Inactive(file): 156628 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3974312 kB' 'Mapped: 54148 kB' 'AnonPages: 162024 kB' 'Shmem: 3652912 kB' 'KernelStack: 12184 kB' 'PageTables: 3360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 167976 kB' 'Slab: 578220 kB' 'SReclaimable: 167976 kB' 'SUnreclaim: 410244 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.445 19:19:33 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.445 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
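At this point the check has moved from the system-wide figures (nr_hugepages=1536, resv=0, surplus=0 read from /proc/meminfo) to per-node accounting: get_nodes found two NUMA nodes carrying 512 and 1024 huge pages respectively (nodes_sys), and the loop above re-runs get_meminfo with node=0, so mem_f becomes /sys/devices/system/node/node0/meminfo, whose snapshot reports HugePages_Total: 512, HugePages_Free: 512 and HugePages_Surp: 0. The trace does not show where get_nodes takes its per-node counts from; reading the per-node nr_hugepages attribute is one plausible equivalent (an assumption, not the verbatim hugepages.sh code):

    # Sketch only: per-node 2 MiB huge page counts via sysfs (path assumed;
    # Hugepagesize is 2048 kB in the snapshot above).
    for node in /sys/devices/system/node/node[0-9]*; do
      nodes_sys[${node##*node}]=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    echo "${nodes_sys[@]}"   # expected here: 512 1024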
00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 
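Node 0 therefore contributes a surplus of 0, so nodes_test[0] (the count the test expects for that node, set before this excerpt) is left unchanged by the `+= resv` and `+= 0` additions, and the loop now repeats the lookup for node 1. The arithmetic the whole pass is built around is simply that the per-node counts must add up to the system-wide total already verified above:

    # Numbers taken from this log: 512 (node0) + 1024 (node1) = 1536,
    # and nr_hugepages + surp + resv = 1536 + 0 + 0.
    (( 512 + 1024 == 1536 + 0 + 0 )) && echo 'per-node split matches the global total'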
00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679868 kB' 'MemFree: 45036652 kB' 'MemUsed: 15643216 kB' 'SwapCached: 0 kB' 'Active: 6769928 kB' 'Inactive: 4324944 kB' 'Active(anon): 6294836 kB' 'Inactive(anon): 0 kB' 'Active(file): 475092 kB' 'Inactive(file): 4324944 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10630940 kB' 'Mapped: 166872 kB' 'AnonPages: 463968 kB' 'Shmem: 5830904 kB' 'KernelStack: 15176 kB' 'PageTables: 5784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 204684 kB' 'Slab: 674464 kB' 'SReclaimable: 204684 kB' 'SUnreclaim: 469780 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.446 19:19:33 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.446 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.447 19:19:33 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
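The node 1 lookup follows the same path with mem_f switched to /sys/devices/system/node/node1/meminfo; that snapshot reports HugePages_Total: 1024, HugePages_Free: 1024 and HugePages_Surp: 0, so the scan again ends in `echo 0`. The tail of this excerpt then starts building sorted_t from nodes_test and sorted_s from nodes_sys, using each page count as an array index so the keys come back ordered; the comparison itself falls outside this excerpt, but under that reading one plausible form of it is (an assumption, not the verbatim hugepages.sh code):

    # Sketch of the final check implied by the sorted_t/sorted_s bookkeeping.
    sorted_t=(); sorted_s=()
    sorted_t[512]=1; sorted_t[1024]=1    # per-node counts the test asked for (nodes_test)
    sorted_s[512]=1; sorted_s[1024]=1    # per-node counts the system reports (nodes_sys)
    [[ "${!sorted_t[*]}" == "${!sorted_s[*]}" ]] && echo '512/1024 custom split confirmed'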
00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.447 19:19:33 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:07.447 node0=512 expecting 512 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:07.447 node1=1024 expecting 1024 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:07.447 00:04:07.447 real 0m4.291s 00:04:07.447 user 0m1.577s 00:04:07.447 sys 0m2.759s 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:07.447 19:19:33 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:07.447 ************************************ 00:04:07.447 END TEST custom_alloc 00:04:07.447 ************************************ 00:04:07.709 19:19:33 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:07.709 19:19:33 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:07.709 19:19:33 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:07.709 19:19:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:07.709 ************************************ 00:04:07.709 START TEST no_shrink_alloc 00:04:07.709 ************************************ 00:04:07.709 19:19:33 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:04:07.709 19:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:07.709 19:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:07.709 19:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:07.709 19:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:07.709 19:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:07.709 19:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:07.709 19:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:07.709 19:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:07.709 19:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:07.709 19:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:07.709 19:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:07.709 19:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:07.709 19:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:07.709 19:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:07.709 19:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:07.709 19:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:07.709 19:19:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:07.709 19:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:07.709 19:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:07.709 19:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:07.709 19:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.709 19:19:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:11.922 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:11.922 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:11.922 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:11.922 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:11.922 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:11.922 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:11.922 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:11.922 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:11.922 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:11.922 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:11.922 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:11.922 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:11.922 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:11.922 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:11.922 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:11.922 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:11.922 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:11.922 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:11.922 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:11.922 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:11.922 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' 
]] 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 104966068 kB' 'MemAvailable: 109449504 kB' 'Buffers: 4144 kB' 'Cached: 14601216 kB' 'SwapCached: 0 kB' 'Active: 10746992 kB' 'Inactive: 4481572 kB' 'Active(anon): 10107128 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 626052 kB' 'Mapped: 221192 kB' 'Shmem: 9483924 kB' 'KReclaimable: 372660 kB' 'Slab: 1252548 kB' 'SReclaimable: 372660 kB' 'SUnreclaim: 879888 kB' 'KernelStack: 27376 kB' 'PageTables: 9232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 11540892 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237896 kB' 'VmallocChunk: 0 kB' 'Percpu: 129024 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4439412 kB' 'DirectMap2M: 50814976 kB' 'DirectMap1G: 80740352 kB' 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.923 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 104966340 kB' 'MemAvailable: 109449776 kB' 'Buffers: 4144 kB' 'Cached: 14601220 kB' 'SwapCached: 0 kB' 'Active: 10746980 kB' 'Inactive: 4481572 kB' 'Active(anon): 10107116 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 626076 kB' 'Mapped: 221128 kB' 'Shmem: 9483928 kB' 'KReclaimable: 372660 kB' 'Slab: 1252516 kB' 'SReclaimable: 372660 kB' 'SUnreclaim: 879856 kB' 'KernelStack: 27376 kB' 'PageTables: 9212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 11542272 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237864 kB' 'VmallocChunk: 0 kB' 'Percpu: 129024 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4439412 kB' 'DirectMap2M: 50814976 kB' 'DirectMap1G: 80740352 kB' 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.924 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.925 
19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.925 19:19:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.925 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.926 19:19:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 104968596 kB' 'MemAvailable: 109452032 kB' 'Buffers: 4144 kB' 'Cached: 14601240 kB' 'SwapCached: 0 kB' 'Active: 10746336 kB' 'Inactive: 4481572 kB' 'Active(anon): 10106472 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 625960 kB' 'Mapped: 221052 kB' 'Shmem: 9483948 kB' 'KReclaimable: 372660 kB' 'Slab: 1252516 kB' 'SReclaimable: 372660 kB' 'SUnreclaim: 879856 kB' 'KernelStack: 27328 kB' 'PageTables: 9076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 11542460 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237864 kB' 'VmallocChunk: 0 kB' 'Percpu: 129024 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4439412 kB' 'DirectMap2M: 50814976 kB' 'DirectMap1G: 80740352 
kB' 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.926 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.927 19:19:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.927 
19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.927 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.928 19:19:37 
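The entries above show setup/common.sh's get_meminfo() being driven by setup/hugepages.sh: it loads /proc/meminfo into an array, strips any "Node N " prefix, then walks the "key: value" pairs until the requested key (HugePages_Surp, then HugePages_Rsvd) matches and its value is echoed. A minimal sketch of that pattern, reconstructed from the xtrace rather than copied from the script:

    #!/usr/bin/env bash
    shopt -s extglob

    # Print the value of one /proc/meminfo (or per-node meminfo) field.
    # Reconstructed from the trace above; not the verbatim setup/common.sh body.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo mem var val _
        # With a node argument, read the per-node file instead of the global one.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Per-node lines carry a "Node N " prefix; drop it so the keys line up.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every non-matching key
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Rsvd     # 0 in this run
    get_meminfo HugePages_Surp 0   # same key, restricted to NUMA node 0

The long runs of "[[ <key> == \H\u\g\e\P\a\g\e\s\_... ]] / continue" entries in the log are this loop skipping every field that is not the one requested.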
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:11.928 nr_hugepages=1024 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:11.928 resv_hugepages=0 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:11.928 surplus_hugepages=0 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:11.928 anon_hugepages=0 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 104969920 kB' 'MemAvailable: 109453356 kB' 'Buffers: 4144 kB' 'Cached: 14601260 kB' 'SwapCached: 0 kB' 'Active: 10746588 kB' 'Inactive: 4481572 kB' 'Active(anon): 10106724 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 626224 kB' 'Mapped: 221052 kB' 'Shmem: 9483968 kB' 'KReclaimable: 372660 kB' 'Slab: 1252516 kB' 'SReclaimable: 372660 kB' 'SUnreclaim: 879856 kB' 'KernelStack: 27360 kB' 'PageTables: 9336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 11544048 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237864 kB' 'VmallocChunk: 0 kB' 'Percpu: 129024 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 
4439412 kB' 'DirectMap2M: 50814976 kB' 'DirectMap1G: 80740352 kB' 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.928 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.929 19:19:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.929 
19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.929 19:19:37 
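A few entries back, right after the HugePages_Rsvd lookup, hugepages.sh echoes the derived totals (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0), and the HugePages_Total lookup still in progress here returns 1024 below. The arithmetic guards visible in the trace amount to the following check; the variable names and the two (( )) tests are taken from the log, while the failure handling is an assumption:

    nr_hugepages=1024   # HugePages_Total reported by the kernel
    surp=0              # HugePages_Surp
    resv=0              # HugePages_Rsvd

    # The no_shrink_alloc test only proceeds if the pool it configured is
    # exactly what meminfo reports: 1024 == 1024 + 0 + 0.
    (( 1024 == nr_hugepages + surp + resv )) || exit 1   # exit on mismatch is assumed
    (( 1024 == nr_hugepages )) || exit 1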
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.929 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # 
nodes_sys[${node##*node}]=1024 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 57854340 kB' 'MemUsed: 7804668 kB' 'SwapCached: 0 kB' 'Active: 3976784 kB' 'Inactive: 156628 kB' 'Active(anon): 3812012 kB' 'Inactive(anon): 0 kB' 'Active(file): 164772 kB' 'Inactive(file): 156628 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3974452 kB' 'Mapped: 54148 kB' 'AnonPages: 162120 kB' 'Shmem: 3653052 kB' 'KernelStack: 12072 kB' 'PageTables: 3220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 167976 kB' 'Slab: 578088 kB' 'SReclaimable: 167976 kB' 'SUnreclaim: 410112 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.930 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.931 19:19:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.931 19:19:37 
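From the get_nodes call onward the trace switches to per-node accounting: /sys/devices/system/node/node[0-9]* is globbed (no_nodes=2 on this box), a per-node hugepage count is recorded in nodes_sys (1024 on one node, 0 on the other), and get_meminfo is re-run with node=0 so it reads /sys/devices/system/node/node0/meminfo instead of /proc/meminfo. A rough sketch of that walk follows; the function and array names come from the trace, but where the per-node counts are read from is not visible in this snippet, so the sysfs path below is an assumption:

    #!/usr/bin/env bash
    shopt -s extglob
    declare -a nodes_sys

    # Enumerate NUMA nodes and record how many 2 MiB hugepages each one holds.
    # Sketch only; the real setup/hugepages.sh get_nodes may gather this differently.
    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
        done
        no_nodes=${#nodes_sys[@]}    # 2 in this run
        (( no_nodes > 0 ))
    }

    get_nodes
    for n in "${!nodes_sys[@]}"; do
        echo "node$n holds ${nodes_sys[$n]} hugepages"
    done

Per-node HugePages_Surp is then read through the same get_meminfo pattern sketched earlier, which is what the node0 scan continuing below this point is doing.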
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.931 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.932 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.932 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.932 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.932 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.932 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.932 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.932 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:11.932 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:11.932 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:11.932 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:11.932 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:11.932 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:11.932 node0=1024 expecting 1024 00:04:11.932 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:11.932 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:11.932 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:11.932 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:11.932 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.932 19:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:16.144 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:16.145 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:16.145 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:16.145 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:16.145 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:16.145 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:16.145 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:16.145 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:16.145 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:16.145 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:16.145 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:16.145 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:16.145 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:16.145 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:16.145 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:16.145 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:16.145 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:16.145 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@90 -- # local sorted_t 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 104970468 kB' 'MemAvailable: 109453904 kB' 'Buffers: 4144 kB' 'Cached: 14601376 kB' 'SwapCached: 0 kB' 'Active: 10748656 kB' 'Inactive: 4481572 kB' 'Active(anon): 10108792 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 627940 kB' 'Mapped: 221128 kB' 'Shmem: 9484084 kB' 'KReclaimable: 372660 kB' 'Slab: 1252604 kB' 'SReclaimable: 372660 kB' 'SUnreclaim: 879944 kB' 'KernelStack: 27472 kB' 'PageTables: 9388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 11544556 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238024 kB' 'VmallocChunk: 0 kB' 'Percpu: 129024 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4439412 kB' 'DirectMap2M: 50814976 kB' 'DirectMap1G: 80740352 kB' 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.145 19:19:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
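(The loop traced here is setup/common.sh's get_meminfo scanning one "<field>: <value>" pair per iteration until the requested key matches -- AnonHugePages at this point, HugePages_Surp and HugePages_Rsvd further down -- then echoing the value and returning. A minimal stand-alone sketch of the same lookup, assuming only bash plus awk; the helper name below is illustrative and is not the SPDK function itself:

get_meminfo_field() {   # usage: get_meminfo_field AnonHugePages [numa-node]
    local field=$1 node=${2-} file=/proc/meminfo
    # When a node is given and a per-node meminfo exists, read that instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node files prefix every line with "Node <n> "; drop that prefix,
    # then print the first token after "<field>:" (a kB figure or a count).
    awk -v key="${field}:" '{ sub(/^Node [0-9]+ /, "") } $1 == key { print $2; exit }' "$file"
}

# e.g. get_meminfo_field HugePages_Free   -> 1024 in the snapshot printed above)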
00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.145 19:19:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.145 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
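(Around these lookups, setup/hugepages.sh compares what the node actually has against what the test expects; that is where the "node0=1024 expecting 1024" line and the "INFO: Requested 512 hugepages but 1024 already allocated on node0" message earlier in this trace come from -- NRHUGE=512 with CLEAR_HUGE=no, so setup.sh leaves the existing 1024 pages alone. A hedged sketch of the shape of that check, reusing the illustrative get_meminfo_field helper above with the values seen in this run; the variable names are illustrative and this is not the real verify_nr_hugepages logic:

expected=1024                                   # pages set up earlier in the test
nrhuge_requested=512                            # NRHUGE for this setup pass
allocated=$(get_meminfo_field HugePages_Total)  # 1024 in the snapshot above

# setup.sh skips reallocation when enough pages already exist.
if (( allocated >= nrhuge_requested )); then
    echo "INFO: Requested $nrhuge_requested hugepages but $allocated already allocated on node0"
fi

echo "node0=$allocated expecting $expected"
[[ $allocated -eq $expected ]]                  # a non-zero exit would fail the test)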
00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # 
mapfile -t mem 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 104970152 kB' 'MemAvailable: 109453588 kB' 'Buffers: 4144 kB' 'Cached: 14601380 kB' 'SwapCached: 0 kB' 'Active: 10747324 kB' 'Inactive: 4481572 kB' 'Active(anon): 10107460 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 627012 kB' 'Mapped: 221108 kB' 'Shmem: 9484088 kB' 'KReclaimable: 372660 kB' 'Slab: 1252152 kB' 'SReclaimable: 372660 kB' 'SUnreclaim: 879492 kB' 'KernelStack: 27296 kB' 'PageTables: 9112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 11544552 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237976 kB' 'VmallocChunk: 0 kB' 'Percpu: 129024 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4439412 kB' 'DirectMap2M: 50814976 kB' 'DirectMap1G: 80740352 kB' 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.146 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.147 19:19:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 
19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 19:19:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 104970816 kB' 'MemAvailable: 109454252 kB' 'Buffers: 4144 kB' 'Cached: 14601396 kB' 'SwapCached: 0 kB' 'Active: 10748312 kB' 'Inactive: 4481572 kB' 'Active(anon): 10108448 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 627616 kB' 'Mapped: 221108 kB' 'Shmem: 9484104 kB' 'KReclaimable: 372660 kB' 'Slab: 1252120 kB' 'SReclaimable: 372660 kB' 'SUnreclaim: 879460 kB' 'KernelStack: 27424 kB' 'PageTables: 9252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 11544728 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238008 kB' 'VmallocChunk: 0 kB' 'Percpu: 129024 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4439412 kB' 'DirectMap2M: 50814976 kB' 'DirectMap1G: 80740352 kB' 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 19:19:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.148 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.149 19:19:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.149 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 
19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:16.150 nr_hugepages=1024 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:16.150 resv_hugepages=0 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:16.150 surplus_hugepages=0 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:16.150 anon_hugepages=0 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 104970548 kB' 'MemAvailable: 109453984 kB' 'Buffers: 4144 kB' 'Cached: 14601400 kB' 'SwapCached: 0 kB' 'Active: 10748832 kB' 'Inactive: 4481572 kB' 'Active(anon): 10108968 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 628108 kB' 'Mapped: 221108 kB' 'Shmem: 9484108 kB' 'KReclaimable: 372660 kB' 'Slab: 1252120 kB' 'SReclaimable: 372660 kB' 'SUnreclaim: 879460 kB' 'KernelStack: 27504 kB' 'PageTables: 9652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 11544756 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238024 kB' 'VmallocChunk: 0 kB' 'Percpu: 129024 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4439412 kB' 'DirectMap2M: 50814976 kB' 'DirectMap1G: 80740352 kB' 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.151 19:19:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.151 19:19:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.151 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.152 19:19:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
65659008 kB' 'MemFree: 57878496 kB' 'MemUsed: 7780512 kB' 'SwapCached: 0 kB' 'Active: 3980712 kB' 'Inactive: 156628 kB' 'Active(anon): 3815940 kB' 'Inactive(anon): 0 kB' 'Active(file): 164772 kB' 'Inactive(file): 156628 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3974604 kB' 'Mapped: 54148 kB' 'AnonPages: 165888 kB' 'Shmem: 3653204 kB' 'KernelStack: 12216 kB' 'PageTables: 3468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 167976 kB' 'Slab: 577656 kB' 'SReclaimable: 167976 kB' 'SUnreclaim: 409680 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.152 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.152 
19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.153 19:19:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.153 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.154 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.154 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.154 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.154 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.154 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.154 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.154 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.154 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:16.154 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:16.154 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:16.154 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:16.154 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:16.154 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:16.154 node0=1024 expecting 1024 00:04:16.154 19:19:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:16.154 00:04:16.154 real 0m8.268s 00:04:16.154 user 0m3.054s 00:04:16.154 sys 0m5.314s 00:04:16.154 19:19:41 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:16.154 19:19:41 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:16.154 ************************************ 00:04:16.154 END TEST no_shrink_alloc 00:04:16.154 ************************************ 00:04:16.154 19:19:41 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:16.154 19:19:41 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:16.154 19:19:41 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 
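The wall of xtrace above is setup/common.sh's get_meminfo helper doing a field-by-field scan: it reads /proc/meminfo (or /sys/devices/system/node/node<N>/meminfo when a node index is supplied), strips any leading "Node <N>" prefix, and echoes the value of the requested key. Here that yields HugePages_Rsvd (0), HugePages_Total (1024) and the per-node HugePages_Surp (0), which is how the test arrives at nr_hugepages=1024, resv_hugepages=0 and surplus_hugepages=0 before asserting "node0=1024 expecting 1024". A minimal standalone sketch of that parsing idea, using a hypothetical helper name (not the SPDK function itself):

#!/usr/bin/env bash
# get_mem KEY [NODE] - print the value of KEY from /proc/meminfo, or from the
# per-node file /sys/devices/system/node/node<NODE>/meminfo when NODE is given.
get_mem() {
    local key=$1 node=$2
    local file=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        line=${line#"Node $node "}          # per-node lines carry a "Node <N> " prefix
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$key" ]]; then
            echo "$val"
            return 0
        fi
    done < "$file"
    return 1
}

# The checks traced above then amount to roughly:
#   (( $(get_mem HugePages_Total) == 1024 ))    # pool sized as configured
#   (( $(get_mem HugePages_Rsvd) == 0 ))        # nothing reserved
#   (( $(get_mem HugePages_Surp 0) == 0 ))      # no surplus pages on node 0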
00:04:16.154 19:19:41 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:16.154 19:19:41 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:16.154 19:19:41 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:16.154 19:19:41 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:16.154 19:19:41 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:16.154 19:19:41 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:16.154 19:19:41 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:16.154 19:19:41 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:16.154 19:19:41 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:16.154 19:19:41 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:16.154 19:19:41 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:16.154 00:04:16.154 real 0m30.693s 00:04:16.154 user 0m11.435s 00:04:16.154 sys 0m19.602s 00:04:16.154 19:19:41 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:16.154 19:19:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:16.154 ************************************ 00:04:16.154 END TEST hugepages 00:04:16.154 ************************************ 00:04:16.154 19:19:42 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:16.154 19:19:42 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:16.154 19:19:42 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:16.154 19:19:42 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:16.154 ************************************ 00:04:16.154 START TEST driver 00:04:16.154 ************************************ 00:04:16.154 19:19:42 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:16.154 * Looking for test storage... 
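The hugepage teardown traced above (clear_hp) reduces to zeroing every per-node hugepage pool in sysfs. A minimal bash sketch of that step, hedged to this run's layout (node numbers and page sizes vary by machine; nr_hugepages is the standard sysfs knob):

for node in /sys/devices/system/node/node*; do
  for hp in "$node"/hugepages/hugepages-*; do
    # returning the pool to 0 pages hands the memory back to the kernel
    echo 0 | sudo tee "$hp/nr_hugepages" > /dev/null
  done
done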
00:04:16.154 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:16.154 19:19:42 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:16.154 19:19:42 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:16.154 19:19:42 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:22.740 19:19:47 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:22.740 19:19:47 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:22.740 19:19:47 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:22.740 19:19:47 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:22.740 ************************************ 00:04:22.740 START TEST guess_driver 00:04:22.740 ************************************ 00:04:22.740 19:19:47 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:04:22.740 19:19:47 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:22.740 19:19:47 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:22.740 19:19:47 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:22.740 19:19:47 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:22.740 19:19:47 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:22.740 19:19:47 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:22.740 19:19:47 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:22.740 19:19:47 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:22.740 19:19:47 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:22.740 19:19:47 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 370 > 0 )) 00:04:22.740 19:19:47 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:22.740 19:19:47 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:22.740 19:19:47 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:22.740 19:19:47 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:22.740 19:19:47 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:22.740 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:22.740 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:22.740 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:22.740 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:22.740 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:22.740 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:22.740 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:22.740 19:19:47 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:22.740 19:19:47 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:22.740 19:19:47 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:22.740 19:19:47 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:22.740 19:19:47 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:22.740 Looking for driver=vfio-pci 00:04:22.740 19:19:47 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.740 19:19:47 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:22.740 19:19:47 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.740 19:19:47 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:26.045 19:19:51 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:26.045 19:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:26.045 19:19:52 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:26.045 19:19:52 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:26.045 19:19:52 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:26.045 19:19:52 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:31.396 00:04:31.396 real 0m9.618s 00:04:31.396 user 0m3.055s 00:04:31.396 sys 0m5.637s 00:04:31.396 19:19:57 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:31.396 19:19:57 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:31.396 ************************************ 00:04:31.396 END TEST guess_driver 00:04:31.396 ************************************ 00:04:31.396 00:04:31.396 real 0m15.350s 00:04:31.396 user 0m4.742s 00:04:31.396 sys 0m8.808s 00:04:31.396 19:19:57 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:31.396 
19:19:57 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:31.396 ************************************ 00:04:31.396 END TEST driver 00:04:31.396 ************************************ 00:04:31.396 19:19:57 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:31.396 19:19:57 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:31.396 19:19:57 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:31.396 19:19:57 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:31.396 ************************************ 00:04:31.396 START TEST devices 00:04:31.396 ************************************ 00:04:31.396 19:19:57 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:31.656 * Looking for test storage... 00:04:31.657 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:31.657 19:19:57 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:31.657 19:19:57 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:31.657 19:19:57 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:31.657 19:19:57 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:35.862 19:20:02 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:35.862 19:20:02 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:35.862 19:20:02 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:35.862 19:20:02 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:35.862 19:20:02 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:35.862 19:20:02 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:35.862 19:20:02 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:35.862 19:20:02 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:35.862 19:20:02 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:35.862 19:20:02 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:35.862 19:20:02 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:35.862 19:20:02 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:35.862 19:20:02 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:35.863 19:20:02 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:35.863 19:20:02 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:35.863 19:20:02 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:35.863 19:20:02 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:35.863 19:20:02 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:04:35.863 19:20:02 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:35.863 19:20:02 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:35.863 19:20:02 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:35.863 19:20:02 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:36.124 No valid GPT data, 
bailing 00:04:36.124 19:20:02 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:36.124 19:20:02 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:36.124 19:20:02 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:36.124 19:20:02 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:36.124 19:20:02 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:36.124 19:20:02 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:36.124 19:20:02 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:04:36.124 19:20:02 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:04:36.124 19:20:02 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:36.124 19:20:02 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:04:36.124 19:20:02 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:36.124 19:20:02 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:36.124 19:20:02 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:36.124 19:20:02 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:36.124 19:20:02 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:36.124 19:20:02 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:36.124 ************************************ 00:04:36.124 START TEST nvme_mount 00:04:36.124 ************************************ 00:04:36.124 19:20:02 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:04:36.124 19:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:36.124 19:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:36.124 19:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:36.124 19:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:36.124 19:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:36.124 19:20:02 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:36.124 19:20:02 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:36.124 19:20:02 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:36.124 19:20:02 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:36.124 19:20:02 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:36.124 19:20:02 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:36.124 19:20:02 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:36.124 19:20:02 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:36.124 19:20:02 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:36.124 19:20:02 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:36.124 19:20:02 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:36.124 19:20:02 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:36.124 19:20:02 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:36.124 19:20:02 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:37.065 Creating new GPT entries in memory. 00:04:37.065 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:37.065 other utilities. 00:04:37.065 19:20:03 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:37.065 19:20:03 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:37.065 19:20:03 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:37.065 19:20:03 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:37.065 19:20:03 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:38.448 Creating new GPT entries in memory. 00:04:38.448 The operation has completed successfully. 00:04:38.448 19:20:04 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:38.448 19:20:04 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:38.448 19:20:04 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3343290 00:04:38.448 19:20:04 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.448 19:20:04 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:38.448 19:20:04 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.448 19:20:04 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:38.448 19:20:04 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:38.448 19:20:04 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.448 19:20:04 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:38.448 19:20:04 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:38.448 19:20:04 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:38.448 19:20:04 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.448 19:20:04 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:38.448 19:20:04 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:38.448 19:20:04 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:38.448 19:20:04 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:38.448 19:20:04 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
00:04:38.448 19:20:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.448 19:20:04 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:38.448 19:20:04 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:38.448 19:20:04 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.449 19:20:04 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:42.656 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.656 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.656 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.656 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.656 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.656 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.656 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.656 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.656 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.656 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.656 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.656 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.656 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.656 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.656 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.656 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.656 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.656 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:42.656 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:42.656 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.656 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.656 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.656 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.656 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.656 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.656 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:04:42.656 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.656 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.656 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.656 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.656 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.656 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.656 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.656 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.656 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.656 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.656 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:42.656 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:42.656 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:42.657 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:42.657 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:42.657 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:42.657 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:42.657 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:42.657 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:42.657 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:42.657 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:42.657 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:42.657 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:42.657 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:42.657 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:42.657 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:42.657 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:42.657 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:42.657 19:20:08 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:42.657 19:20:08 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:42.657 19:20:08 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:42.657 19:20:08 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:42.657 19:20:08 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:42.918 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:42.918 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:42.918 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:42.918 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:42.918 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:42.918 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:42.918 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:42.918 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:42.918 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:42.918 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.918 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:42.918 19:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:42.918 19:20:08 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.918 19:20:08 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.127 19:20:12 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:50.433 19:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.433 19:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.433 19:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.433 19:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.433 19:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.433 19:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.433 19:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.433 19:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.433 19:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.433 19:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.433 19:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.434 19:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.434 19:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.434 19:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.434 19:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.434 19:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.434 19:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 
00:04:50.434 19:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:50.434 19:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:50.434 19:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.434 19:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.434 19:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.434 19:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.434 19:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.434 19:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.434 19:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.434 19:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.434 19:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.434 19:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.434 19:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.434 19:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.434 19:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.434 19:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.434 19:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.434 19:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.434 19:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.707 19:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:50.707 19:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:50.707 19:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:50.707 19:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:50.707 19:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:50.967 19:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:50.967 19:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:50.967 19:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:50.967 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:50.967 00:04:50.967 real 0m14.737s 00:04:50.967 user 0m4.479s 00:04:50.967 sys 0m8.112s 00:04:50.967 19:20:16 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:50.967 19:20:16 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:50.967 ************************************ 00:04:50.967 END TEST nvme_mount 00:04:50.967 ************************************ 00:04:50.967 
19:20:16 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:50.967 19:20:16 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:50.967 19:20:16 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:50.967 19:20:16 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:50.967 ************************************ 00:04:50.967 START TEST dm_mount 00:04:50.967 ************************************ 00:04:50.967 19:20:16 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:04:50.967 19:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:50.967 19:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:50.967 19:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:50.967 19:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:50.967 19:20:16 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:50.967 19:20:16 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:50.967 19:20:16 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:50.967 19:20:16 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:50.967 19:20:16 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:50.967 19:20:16 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:50.967 19:20:16 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:50.967 19:20:16 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:50.967 19:20:16 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:50.967 19:20:16 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:50.967 19:20:16 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:50.967 19:20:16 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:50.968 19:20:16 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:50.968 19:20:16 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:50.968 19:20:16 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:50.968 19:20:16 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:50.968 19:20:16 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:51.908 Creating new GPT entries in memory. 00:04:51.908 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:51.908 other utilities. 00:04:51.908 19:20:18 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:51.908 19:20:18 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:51.908 19:20:18 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:51.908 19:20:18 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:51.908 19:20:18 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:52.850 Creating new GPT entries in memory. 00:04:52.850 The operation has completed successfully. 
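For orientation at this point in the trace: the dm_mount test in progress carves two 1 GiB partitions (the second sgdisk --new call follows just below) and then stacks a device-mapper node named nvme_dm_test on top of them. The exact table it pipes to dmsetup is not visible in this log, so the sketch below assumes a plain linear concatenation (2097152 sectors per partition, per the sgdisk ranges in the trace); the teardown lines mirror the cleanup_dm trace further down.

part_sectors=2097152
sudo dmsetup create nvme_dm_test <<TABLE
0 $part_sectors linear /dev/nvme0n1p1 0
$part_sectors $part_sectors linear /dev/nvme0n1p2 0
TABLE
sudo mkfs.ext4 -qF /dev/mapper/nvme_dm_test
# cleanup: drop the mapping, then wipe the partition signatures
sudo dmsetup remove --force nvme_dm_test
sudo wipefs --all /dev/nvme0n1p1
sudo wipefs --all /dev/nvme0n1p2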
00:04:52.850 19:20:19 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:52.850 19:20:19 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:52.850 19:20:19 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:52.850 19:20:19 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:52.850 19:20:19 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:54.235 The operation has completed successfully. 00:04:54.235 19:20:20 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:54.235 19:20:20 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:54.235 19:20:20 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3349135 00:04:54.235 19:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:54.235 19:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:54.235 19:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:54.235 19:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:54.235 19:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:54.235 19:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:54.235 19:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:54.235 19:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:54.235 19:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:54.235 19:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-1 00:04:54.235 19:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-1 00:04:54.235 19:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-1 ]] 00:04:54.235 19:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-1 ]] 00:04:54.235 19:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:54.235 19:20:20 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:54.235 19:20:20 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:54.235 19:20:20 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:54.235 19:20:20 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:54.235 19:20:20 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:54.235 19:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:54.235 19:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:54.235 19:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:54.235 19:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:54.235 19:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:54.235 19:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:54.235 19:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:54.235 19:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:54.235 19:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:54.235 19:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.235 19:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:54.235 19:20:20 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:54.235 19:20:20 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.235 19:20:20 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:58.445 19:20:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:58.445 19:20:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.445 19:20:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:58.445 19:20:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.445 19:20:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:58.445 19:20:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.445 19:20:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:58.445 19:20:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.445 19:20:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:58.445 19:20:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.445 19:20:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:58.445 19:20:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.445 19:20:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:58.445 19:20:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.445 19:20:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:58.445 19:20:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.445 19:20:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:58.445 19:20:23 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:58.445 19:20:23 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:58.445 19:20:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.445 19:20:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:58.445 19:20:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.445 19:20:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:58.445 19:20:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.445 19:20:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:58.445 19:20:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.445 19:20:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:58.445 19:20:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.445 19:20:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:58.445 19:20:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.445 19:20:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:58.445 19:20:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.445 19:20:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:58.445 19:20:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.445 19:20:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:58.445 19:20:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.445 19:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:58.445 19:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:58.445 19:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:58.445 19:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:58.445 19:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:58.445 19:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:58.445 19:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 '' '' 00:04:58.445 19:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:58.445 19:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 00:04:58.445 19:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:58.445 
19:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:58.445 19:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:58.445 19:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:58.445 19:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:58.445 19:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.445 19:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:58.445 19:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:58.445 19:20:24 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:58.445 19:20:24 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\1\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\1* ]] 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:02.649 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:02.649 00:05:02.649 real 0m11.579s 00:05:02.649 user 0m3.119s 00:05:02.649 sys 0m5.510s 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:02.649 19:20:28 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:02.649 ************************************ 00:05:02.649 END TEST dm_mount 00:05:02.649 ************************************ 00:05:02.649 19:20:28 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:02.649 19:20:28 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:02.649 19:20:28 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:02.649 19:20:28 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 
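The long run of [[ 0000:xx:xx.x == ... ]] comparisons traced above is the dm_mount verify step: setup.sh config is run with PCI_ALLOWED restricted to the device under test, and its per-device output is scanned for an "Active devices:" line proving the device was left on its driver because our mount or holder sits on it. A minimal sketch of that loop, reconstructed from the trace; dev, mounts and found follow the traced variable names, while the process substitution and the direct PCI_ALLOWED=... scripts/setup.sh config invocation are simplifying assumptions (the real helper lives in test/setup/devices.sh and test/setup/common.sh):

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
dev=0000:65:00.0                                      # PCI address under test
mounts=holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1    # what must still be active on it
found=0
while read -r pci _ _ status; do
    [[ $pci != "$dev" ]] && continue                  # skip the ioatdma devices listed above
    # setup.sh reports e.g. "... Active devices: holder@nvme0n1p1:dm-1,..., so not binding PCI dev"
    [[ $status == *"Active devices: "*"$mounts"* ]] && found=1
done < <(PCI_ALLOWED=$dev "$rootdir/scripts/setup.sh" config)
(( found == 1 ))                                      # the test asserts the device was not rebound

The same scan runs twice in the trace: once while nvme_dm_test is mounted (matching on nvme0n1:nvme_dm_test) and once after the umount, when only the dm holders remain.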
00:05:02.649 19:20:28 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:02.649 19:20:28 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:02.649 19:20:28 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:02.910 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:02.910 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:05:02.910 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:02.910 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:02.910 19:20:28 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:02.910 19:20:28 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:02.910 19:20:28 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:02.910 19:20:28 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:02.910 19:20:28 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:02.910 19:20:28 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:02.910 19:20:28 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:02.910 00:05:02.910 real 0m31.391s 00:05:02.910 user 0m9.360s 00:05:02.910 sys 0m16.798s 00:05:02.910 19:20:28 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:02.910 19:20:28 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:02.910 ************************************ 00:05:02.910 END TEST devices 00:05:02.910 ************************************ 00:05:02.910 00:05:02.910 real 1m46.430s 00:05:02.910 user 0m34.648s 00:05:02.910 sys 1m2.717s 00:05:02.910 19:20:28 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:02.910 19:20:28 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:02.910 ************************************ 00:05:02.910 END TEST setup.sh 00:05:02.910 ************************************ 00:05:02.910 19:20:28 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:07.114 Hugepages 00:05:07.114 node hugesize free / total 00:05:07.114 node0 1048576kB 0 / 0 00:05:07.114 node0 2048kB 2048 / 2048 00:05:07.114 node1 1048576kB 0 / 0 00:05:07.114 node1 2048kB 0 / 0 00:05:07.114 00:05:07.114 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:07.114 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:05:07.114 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:05:07.114 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:05:07.114 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:05:07.114 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:05:07.114 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:05:07.114 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:05:07.114 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:05:07.114 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:05:07.114 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:05:07.114 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:05:07.114 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:05:07.114 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:05:07.114 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:05:07.114 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:05:07.114 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:05:07.114 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:05:07.114 19:20:33 -- spdk/autotest.sh@130 -- # uname -s 
00:05:07.114 19:20:33 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:07.114 19:20:33 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:07.114 19:20:33 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:11.317 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:11.317 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:11.317 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:11.317 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:11.317 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:11.318 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:11.318 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:11.318 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:11.318 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:11.318 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:11.318 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:11.318 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:11.318 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:11.318 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:11.318 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:11.318 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:12.701 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:12.962 19:20:39 -- common/autotest_common.sh@1528 -- # sleep 1 00:05:13.904 19:20:40 -- common/autotest_common.sh@1529 -- # bdfs=() 00:05:13.904 19:20:40 -- common/autotest_common.sh@1529 -- # local bdfs 00:05:13.904 19:20:40 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:05:14.164 19:20:40 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:05:14.164 19:20:40 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:14.164 19:20:40 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:14.164 19:20:40 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:14.164 19:20:40 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:14.164 19:20:40 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:14.164 19:20:40 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:14.165 19:20:40 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:65:00.0 00:05:14.165 19:20:40 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:18.472 Waiting for block devices as requested 00:05:18.472 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:18.472 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:18.472 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:18.472 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:18.472 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:18.472 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:18.472 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:18.472 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:18.732 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:05:18.732 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:18.993 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:18.993 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:18.993 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:19.253 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:19.253 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:19.253 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:19.514 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:19.775 19:20:45 -- common/autotest_common.sh@1534 -- # 
for bdf in "${bdfs[@]}" 00:05:19.775 19:20:45 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:05:19.775 19:20:45 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:05:19.775 19:20:45 -- common/autotest_common.sh@1498 -- # grep 0000:65:00.0/nvme/nvme 00:05:19.775 19:20:45 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:19.775 19:20:45 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:05:19.775 19:20:45 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:19.775 19:20:45 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:05:19.775 19:20:45 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:05:19.775 19:20:45 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:05:19.775 19:20:45 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:05:19.775 19:20:45 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:19.775 19:20:45 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:19.775 19:20:45 -- common/autotest_common.sh@1541 -- # oacs=' 0x5f' 00:05:19.775 19:20:45 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:19.775 19:20:45 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:19.775 19:20:45 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:05:19.775 19:20:45 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:19.775 19:20:45 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:19.775 19:20:45 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:19.775 19:20:45 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:19.775 19:20:45 -- common/autotest_common.sh@1553 -- # continue 00:05:19.775 19:20:45 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:19.775 19:20:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:19.775 19:20:45 -- common/autotest_common.sh@10 -- # set +x 00:05:19.775 19:20:45 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:19.775 19:20:45 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:19.775 19:20:45 -- common/autotest_common.sh@10 -- # set +x 00:05:19.775 19:20:45 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:23.981 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:23.981 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:23.981 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:23.981 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:23.981 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:23.981 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:23.981 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:23.981 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:23.981 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:23.981 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:23.981 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:23.981 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:23.981 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:23.981 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:23.981 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:23.981 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:23.981 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:24.242 19:20:50 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:24.242 19:20:50 -- common/autotest_common.sh@726 -- # xtrace_disable 
00:05:24.242 19:20:50 -- common/autotest_common.sh@10 -- # set +x 00:05:24.242 19:20:50 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:24.242 19:20:50 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:05:24.242 19:20:50 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:05:24.242 19:20:50 -- common/autotest_common.sh@1573 -- # bdfs=() 00:05:24.242 19:20:50 -- common/autotest_common.sh@1573 -- # local bdfs 00:05:24.242 19:20:50 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:05:24.242 19:20:50 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:24.242 19:20:50 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:24.242 19:20:50 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:24.242 19:20:50 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:24.242 19:20:50 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:24.504 19:20:50 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:24.504 19:20:50 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:65:00.0 00:05:24.504 19:20:50 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:24.504 19:20:50 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:24.504 19:20:50 -- common/autotest_common.sh@1576 -- # device=0xa80a 00:05:24.504 19:20:50 -- common/autotest_common.sh@1577 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:24.504 19:20:50 -- common/autotest_common.sh@1582 -- # printf '%s\n' 00:05:24.504 19:20:50 -- common/autotest_common.sh@1588 -- # [[ -z '' ]] 00:05:24.504 19:20:50 -- common/autotest_common.sh@1589 -- # return 0 00:05:24.504 19:20:50 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:24.504 19:20:50 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:24.504 19:20:50 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:24.504 19:20:50 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:24.504 19:20:50 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:24.504 19:20:50 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:24.504 19:20:50 -- common/autotest_common.sh@10 -- # set +x 00:05:24.504 19:20:50 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:24.504 19:20:50 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:24.504 19:20:50 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:24.504 19:20:50 -- common/autotest_common.sh@10 -- # set +x 00:05:24.504 ************************************ 00:05:24.504 START TEST env 00:05:24.504 ************************************ 00:05:24.504 19:20:50 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:24.504 * Looking for test storage... 
00:05:24.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:24.504 19:20:50 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:24.504 19:20:50 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:24.504 19:20:50 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:24.504 19:20:50 env -- common/autotest_common.sh@10 -- # set +x 00:05:24.765 ************************************ 00:05:24.765 START TEST env_memory 00:05:24.765 ************************************ 00:05:24.765 19:20:50 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:24.765 00:05:24.765 00:05:24.765 CUnit - A unit testing framework for C - Version 2.1-3 00:05:24.765 http://cunit.sourceforge.net/ 00:05:24.765 00:05:24.765 00:05:24.765 Suite: memory 00:05:24.765 Test: alloc and free memory map ...[2024-05-15 19:20:50.761588] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:24.765 passed 00:05:24.765 Test: mem map translation ...[2024-05-15 19:20:50.787201] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:24.765 [2024-05-15 19:20:50.787232] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:24.765 [2024-05-15 19:20:50.787279] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:24.765 [2024-05-15 19:20:50.787288] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:24.765 passed 00:05:24.765 Test: mem map registration ...[2024-05-15 19:20:50.842476] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:24.766 [2024-05-15 19:20:50.842499] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:24.766 passed 00:05:24.766 Test: mem map adjacent registrations ...passed 00:05:24.766 00:05:24.766 Run Summary: Type Total Ran Passed Failed Inactive 00:05:24.766 suites 1 1 n/a 0 0 00:05:24.766 tests 4 4 4 0 0 00:05:24.766 asserts 152 152 152 0 n/a 00:05:24.766 00:05:24.766 Elapsed time = 0.195 seconds 00:05:24.766 00:05:24.766 real 0m0.208s 00:05:24.766 user 0m0.198s 00:05:24.766 sys 0m0.009s 00:05:24.766 19:20:50 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:24.766 19:20:50 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:24.766 ************************************ 00:05:24.766 END TEST env_memory 00:05:24.766 ************************************ 00:05:25.027 19:20:50 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:25.027 19:20:50 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:25.027 19:20:50 env -- common/autotest_common.sh@1103 -- # xtrace_disable 
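run_test, whose starred START TEST / END TEST banners delimit every section of this log, is a thin wrapper: it insists on getting a test name plus a command (the '[' 2 -le 1 ']' guard visible in the trace), prints the opening banner, runs the command under time (which is where the real/user/sys lines before each closing banner come from), and prints the matching END TEST banner while preserving the command's exit status. A rough sketch inferred from those traces; the exact banner width, xtrace handling and timing plumbing in autotest_common.sh may differ:

run_test() {
    [ "$#" -le 1 ] && return 1        # a test name alone is not enough, a command must follow
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                         # produces the real/user/sys summary seen before END TEST
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}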
00:05:25.027 19:20:50 env -- common/autotest_common.sh@10 -- # set +x 00:05:25.027 ************************************ 00:05:25.027 START TEST env_vtophys 00:05:25.027 ************************************ 00:05:25.027 19:20:50 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:25.027 EAL: lib.eal log level changed from notice to debug 00:05:25.027 EAL: Detected lcore 0 as core 0 on socket 0 00:05:25.027 EAL: Detected lcore 1 as core 1 on socket 0 00:05:25.027 EAL: Detected lcore 2 as core 2 on socket 0 00:05:25.027 EAL: Detected lcore 3 as core 3 on socket 0 00:05:25.027 EAL: Detected lcore 4 as core 4 on socket 0 00:05:25.027 EAL: Detected lcore 5 as core 5 on socket 0 00:05:25.027 EAL: Detected lcore 6 as core 6 on socket 0 00:05:25.027 EAL: Detected lcore 7 as core 7 on socket 0 00:05:25.027 EAL: Detected lcore 8 as core 8 on socket 0 00:05:25.027 EAL: Detected lcore 9 as core 9 on socket 0 00:05:25.027 EAL: Detected lcore 10 as core 10 on socket 0 00:05:25.028 EAL: Detected lcore 11 as core 11 on socket 0 00:05:25.028 EAL: Detected lcore 12 as core 12 on socket 0 00:05:25.028 EAL: Detected lcore 13 as core 13 on socket 0 00:05:25.028 EAL: Detected lcore 14 as core 14 on socket 0 00:05:25.028 EAL: Detected lcore 15 as core 15 on socket 0 00:05:25.028 EAL: Detected lcore 16 as core 16 on socket 0 00:05:25.028 EAL: Detected lcore 17 as core 17 on socket 0 00:05:25.028 EAL: Detected lcore 18 as core 18 on socket 0 00:05:25.028 EAL: Detected lcore 19 as core 19 on socket 0 00:05:25.028 EAL: Detected lcore 20 as core 20 on socket 0 00:05:25.028 EAL: Detected lcore 21 as core 21 on socket 0 00:05:25.028 EAL: Detected lcore 22 as core 22 on socket 0 00:05:25.028 EAL: Detected lcore 23 as core 23 on socket 0 00:05:25.028 EAL: Detected lcore 24 as core 24 on socket 0 00:05:25.028 EAL: Detected lcore 25 as core 25 on socket 0 00:05:25.028 EAL: Detected lcore 26 as core 26 on socket 0 00:05:25.028 EAL: Detected lcore 27 as core 27 on socket 0 00:05:25.028 EAL: Detected lcore 28 as core 28 on socket 0 00:05:25.028 EAL: Detected lcore 29 as core 29 on socket 0 00:05:25.028 EAL: Detected lcore 30 as core 30 on socket 0 00:05:25.028 EAL: Detected lcore 31 as core 31 on socket 0 00:05:25.028 EAL: Detected lcore 32 as core 32 on socket 0 00:05:25.028 EAL: Detected lcore 33 as core 33 on socket 0 00:05:25.028 EAL: Detected lcore 34 as core 34 on socket 0 00:05:25.028 EAL: Detected lcore 35 as core 35 on socket 0 00:05:25.028 EAL: Detected lcore 36 as core 0 on socket 1 00:05:25.028 EAL: Detected lcore 37 as core 1 on socket 1 00:05:25.028 EAL: Detected lcore 38 as core 2 on socket 1 00:05:25.028 EAL: Detected lcore 39 as core 3 on socket 1 00:05:25.028 EAL: Detected lcore 40 as core 4 on socket 1 00:05:25.028 EAL: Detected lcore 41 as core 5 on socket 1 00:05:25.028 EAL: Detected lcore 42 as core 6 on socket 1 00:05:25.028 EAL: Detected lcore 43 as core 7 on socket 1 00:05:25.028 EAL: Detected lcore 44 as core 8 on socket 1 00:05:25.028 EAL: Detected lcore 45 as core 9 on socket 1 00:05:25.028 EAL: Detected lcore 46 as core 10 on socket 1 00:05:25.028 EAL: Detected lcore 47 as core 11 on socket 1 00:05:25.028 EAL: Detected lcore 48 as core 12 on socket 1 00:05:25.028 EAL: Detected lcore 49 as core 13 on socket 1 00:05:25.028 EAL: Detected lcore 50 as core 14 on socket 1 00:05:25.028 EAL: Detected lcore 51 as core 15 on socket 1 00:05:25.028 EAL: Detected lcore 52 as core 16 on socket 1 00:05:25.028 EAL: Detected lcore 
53 as core 17 on socket 1 00:05:25.028 EAL: Detected lcore 54 as core 18 on socket 1 00:05:25.028 EAL: Detected lcore 55 as core 19 on socket 1 00:05:25.028 EAL: Detected lcore 56 as core 20 on socket 1 00:05:25.028 EAL: Detected lcore 57 as core 21 on socket 1 00:05:25.028 EAL: Detected lcore 58 as core 22 on socket 1 00:05:25.028 EAL: Detected lcore 59 as core 23 on socket 1 00:05:25.028 EAL: Detected lcore 60 as core 24 on socket 1 00:05:25.028 EAL: Detected lcore 61 as core 25 on socket 1 00:05:25.028 EAL: Detected lcore 62 as core 26 on socket 1 00:05:25.028 EAL: Detected lcore 63 as core 27 on socket 1 00:05:25.028 EAL: Detected lcore 64 as core 28 on socket 1 00:05:25.028 EAL: Detected lcore 65 as core 29 on socket 1 00:05:25.028 EAL: Detected lcore 66 as core 30 on socket 1 00:05:25.028 EAL: Detected lcore 67 as core 31 on socket 1 00:05:25.028 EAL: Detected lcore 68 as core 32 on socket 1 00:05:25.028 EAL: Detected lcore 69 as core 33 on socket 1 00:05:25.028 EAL: Detected lcore 70 as core 34 on socket 1 00:05:25.028 EAL: Detected lcore 71 as core 35 on socket 1 00:05:25.028 EAL: Detected lcore 72 as core 0 on socket 0 00:05:25.028 EAL: Detected lcore 73 as core 1 on socket 0 00:05:25.028 EAL: Detected lcore 74 as core 2 on socket 0 00:05:25.028 EAL: Detected lcore 75 as core 3 on socket 0 00:05:25.028 EAL: Detected lcore 76 as core 4 on socket 0 00:05:25.028 EAL: Detected lcore 77 as core 5 on socket 0 00:05:25.028 EAL: Detected lcore 78 as core 6 on socket 0 00:05:25.028 EAL: Detected lcore 79 as core 7 on socket 0 00:05:25.028 EAL: Detected lcore 80 as core 8 on socket 0 00:05:25.028 EAL: Detected lcore 81 as core 9 on socket 0 00:05:25.028 EAL: Detected lcore 82 as core 10 on socket 0 00:05:25.028 EAL: Detected lcore 83 as core 11 on socket 0 00:05:25.028 EAL: Detected lcore 84 as core 12 on socket 0 00:05:25.028 EAL: Detected lcore 85 as core 13 on socket 0 00:05:25.028 EAL: Detected lcore 86 as core 14 on socket 0 00:05:25.028 EAL: Detected lcore 87 as core 15 on socket 0 00:05:25.028 EAL: Detected lcore 88 as core 16 on socket 0 00:05:25.028 EAL: Detected lcore 89 as core 17 on socket 0 00:05:25.028 EAL: Detected lcore 90 as core 18 on socket 0 00:05:25.028 EAL: Detected lcore 91 as core 19 on socket 0 00:05:25.028 EAL: Detected lcore 92 as core 20 on socket 0 00:05:25.028 EAL: Detected lcore 93 as core 21 on socket 0 00:05:25.028 EAL: Detected lcore 94 as core 22 on socket 0 00:05:25.028 EAL: Detected lcore 95 as core 23 on socket 0 00:05:25.028 EAL: Detected lcore 96 as core 24 on socket 0 00:05:25.028 EAL: Detected lcore 97 as core 25 on socket 0 00:05:25.028 EAL: Detected lcore 98 as core 26 on socket 0 00:05:25.028 EAL: Detected lcore 99 as core 27 on socket 0 00:05:25.028 EAL: Detected lcore 100 as core 28 on socket 0 00:05:25.028 EAL: Detected lcore 101 as core 29 on socket 0 00:05:25.028 EAL: Detected lcore 102 as core 30 on socket 0 00:05:25.028 EAL: Detected lcore 103 as core 31 on socket 0 00:05:25.028 EAL: Detected lcore 104 as core 32 on socket 0 00:05:25.028 EAL: Detected lcore 105 as core 33 on socket 0 00:05:25.028 EAL: Detected lcore 106 as core 34 on socket 0 00:05:25.028 EAL: Detected lcore 107 as core 35 on socket 0 00:05:25.028 EAL: Detected lcore 108 as core 0 on socket 1 00:05:25.028 EAL: Detected lcore 109 as core 1 on socket 1 00:05:25.028 EAL: Detected lcore 110 as core 2 on socket 1 00:05:25.028 EAL: Detected lcore 111 as core 3 on socket 1 00:05:25.028 EAL: Detected lcore 112 as core 4 on socket 1 00:05:25.028 EAL: Detected lcore 113 as core 5 on 
socket 1 00:05:25.028 EAL: Detected lcore 114 as core 6 on socket 1 00:05:25.028 EAL: Detected lcore 115 as core 7 on socket 1 00:05:25.028 EAL: Detected lcore 116 as core 8 on socket 1 00:05:25.028 EAL: Detected lcore 117 as core 9 on socket 1 00:05:25.028 EAL: Detected lcore 118 as core 10 on socket 1 00:05:25.028 EAL: Detected lcore 119 as core 11 on socket 1 00:05:25.028 EAL: Detected lcore 120 as core 12 on socket 1 00:05:25.028 EAL: Detected lcore 121 as core 13 on socket 1 00:05:25.028 EAL: Detected lcore 122 as core 14 on socket 1 00:05:25.028 EAL: Detected lcore 123 as core 15 on socket 1 00:05:25.028 EAL: Detected lcore 124 as core 16 on socket 1 00:05:25.028 EAL: Detected lcore 125 as core 17 on socket 1 00:05:25.029 EAL: Detected lcore 126 as core 18 on socket 1 00:05:25.029 EAL: Detected lcore 127 as core 19 on socket 1 00:05:25.029 EAL: Skipped lcore 128 as core 20 on socket 1 00:05:25.029 EAL: Skipped lcore 129 as core 21 on socket 1 00:05:25.029 EAL: Skipped lcore 130 as core 22 on socket 1 00:05:25.029 EAL: Skipped lcore 131 as core 23 on socket 1 00:05:25.029 EAL: Skipped lcore 132 as core 24 on socket 1 00:05:25.029 EAL: Skipped lcore 133 as core 25 on socket 1 00:05:25.029 EAL: Skipped lcore 134 as core 26 on socket 1 00:05:25.029 EAL: Skipped lcore 135 as core 27 on socket 1 00:05:25.029 EAL: Skipped lcore 136 as core 28 on socket 1 00:05:25.029 EAL: Skipped lcore 137 as core 29 on socket 1 00:05:25.029 EAL: Skipped lcore 138 as core 30 on socket 1 00:05:25.029 EAL: Skipped lcore 139 as core 31 on socket 1 00:05:25.029 EAL: Skipped lcore 140 as core 32 on socket 1 00:05:25.029 EAL: Skipped lcore 141 as core 33 on socket 1 00:05:25.029 EAL: Skipped lcore 142 as core 34 on socket 1 00:05:25.029 EAL: Skipped lcore 143 as core 35 on socket 1 00:05:25.029 EAL: Maximum logical cores by configuration: 128 00:05:25.029 EAL: Detected CPU lcores: 128 00:05:25.029 EAL: Detected NUMA nodes: 2 00:05:25.029 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:25.029 EAL: Detected shared linkage of DPDK 00:05:25.029 EAL: No shared files mode enabled, IPC will be disabled 00:05:25.029 EAL: Bus pci wants IOVA as 'DC' 00:05:25.029 EAL: Buses did not request a specific IOVA mode. 00:05:25.029 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:25.029 EAL: Selected IOVA mode 'VA' 00:05:25.029 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.029 EAL: Probing VFIO support... 00:05:25.029 EAL: IOMMU type 1 (Type 1) is supported 00:05:25.029 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:25.029 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:25.029 EAL: VFIO support initialized 00:05:25.029 EAL: Ask a virtual area of 0x2e000 bytes 00:05:25.029 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:25.029 EAL: Setting up physically contiguous memory... 
00:05:25.029 EAL: Setting maximum number of open files to 524288 00:05:25.029 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:25.029 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:25.029 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:25.029 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.029 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:25.029 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:25.029 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.029 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:25.029 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:25.029 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.029 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:25.029 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:25.029 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.029 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:25.029 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:25.029 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.029 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:25.029 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:25.029 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.029 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:25.029 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:25.029 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.029 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:25.029 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:25.029 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.029 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:25.029 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:25.029 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:25.029 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.029 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:25.029 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:25.029 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.029 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:25.029 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:25.029 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.029 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:25.029 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:25.029 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.029 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:25.029 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:25.029 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.029 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:25.029 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:25.029 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.029 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:25.029 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:25.029 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.029 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:25.029 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:25.029 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.029 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:25.029 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:25.029 EAL: Hugepages will be freed exactly as allocated. 00:05:25.029 EAL: No shared files mode enabled, IPC is disabled 00:05:25.029 EAL: No shared files mode enabled, IPC is disabled 00:05:25.029 EAL: TSC frequency is ~2400000 KHz 00:05:25.029 EAL: Main lcore 0 is ready (tid=7f2bd0f96a00;cpuset=[0]) 00:05:25.029 EAL: Trying to obtain current memory policy. 00:05:25.029 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.029 EAL: Restoring previous memory policy: 0 00:05:25.029 EAL: request: mp_malloc_sync 00:05:25.029 EAL: No shared files mode enabled, IPC is disabled 00:05:25.029 EAL: Heap on socket 0 was expanded by 2MB 00:05:25.029 EAL: No shared files mode enabled, IPC is disabled 00:05:25.029 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:25.029 EAL: Mem event callback 'spdk:(nil)' registered 00:05:25.029 00:05:25.029 00:05:25.029 CUnit - A unit testing framework for C - Version 2.1-3 00:05:25.029 http://cunit.sourceforge.net/ 00:05:25.029 00:05:25.029 00:05:25.029 Suite: components_suite 00:05:25.029 Test: vtophys_malloc_test ...passed 00:05:25.029 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:25.029 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.029 EAL: Restoring previous memory policy: 4 00:05:25.029 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.029 EAL: request: mp_malloc_sync 00:05:25.030 EAL: No shared files mode enabled, IPC is disabled 00:05:25.030 EAL: Heap on socket 0 was expanded by 4MB 00:05:25.030 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.030 EAL: request: mp_malloc_sync 00:05:25.030 EAL: No shared files mode enabled, IPC is disabled 00:05:25.030 EAL: Heap on socket 0 was shrunk by 4MB 00:05:25.030 EAL: Trying to obtain current memory policy. 00:05:25.030 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.030 EAL: Restoring previous memory policy: 4 00:05:25.030 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.030 EAL: request: mp_malloc_sync 00:05:25.030 EAL: No shared files mode enabled, IPC is disabled 00:05:25.030 EAL: Heap on socket 0 was expanded by 6MB 00:05:25.030 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.030 EAL: request: mp_malloc_sync 00:05:25.030 EAL: No shared files mode enabled, IPC is disabled 00:05:25.030 EAL: Heap on socket 0 was shrunk by 6MB 00:05:25.030 EAL: Trying to obtain current memory policy. 00:05:25.030 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.030 EAL: Restoring previous memory policy: 4 00:05:25.030 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.030 EAL: request: mp_malloc_sync 00:05:25.030 EAL: No shared files mode enabled, IPC is disabled 00:05:25.030 EAL: Heap on socket 0 was expanded by 10MB 00:05:25.030 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.030 EAL: request: mp_malloc_sync 00:05:25.030 EAL: No shared files mode enabled, IPC is disabled 00:05:25.030 EAL: Heap on socket 0 was shrunk by 10MB 00:05:25.030 EAL: Trying to obtain current memory policy. 
00:05:25.030 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.030 EAL: Restoring previous memory policy: 4 00:05:25.030 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.030 EAL: request: mp_malloc_sync 00:05:25.030 EAL: No shared files mode enabled, IPC is disabled 00:05:25.030 EAL: Heap on socket 0 was expanded by 18MB 00:05:25.030 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.030 EAL: request: mp_malloc_sync 00:05:25.030 EAL: No shared files mode enabled, IPC is disabled 00:05:25.030 EAL: Heap on socket 0 was shrunk by 18MB 00:05:25.030 EAL: Trying to obtain current memory policy. 00:05:25.030 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.030 EAL: Restoring previous memory policy: 4 00:05:25.030 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.030 EAL: request: mp_malloc_sync 00:05:25.030 EAL: No shared files mode enabled, IPC is disabled 00:05:25.030 EAL: Heap on socket 0 was expanded by 34MB 00:05:25.030 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.030 EAL: request: mp_malloc_sync 00:05:25.030 EAL: No shared files mode enabled, IPC is disabled 00:05:25.030 EAL: Heap on socket 0 was shrunk by 34MB 00:05:25.030 EAL: Trying to obtain current memory policy. 00:05:25.030 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.030 EAL: Restoring previous memory policy: 4 00:05:25.030 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.030 EAL: request: mp_malloc_sync 00:05:25.030 EAL: No shared files mode enabled, IPC is disabled 00:05:25.030 EAL: Heap on socket 0 was expanded by 66MB 00:05:25.030 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.030 EAL: request: mp_malloc_sync 00:05:25.030 EAL: No shared files mode enabled, IPC is disabled 00:05:25.030 EAL: Heap on socket 0 was shrunk by 66MB 00:05:25.030 EAL: Trying to obtain current memory policy. 00:05:25.030 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.030 EAL: Restoring previous memory policy: 4 00:05:25.030 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.030 EAL: request: mp_malloc_sync 00:05:25.030 EAL: No shared files mode enabled, IPC is disabled 00:05:25.030 EAL: Heap on socket 0 was expanded by 130MB 00:05:25.030 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.030 EAL: request: mp_malloc_sync 00:05:25.030 EAL: No shared files mode enabled, IPC is disabled 00:05:25.030 EAL: Heap on socket 0 was shrunk by 130MB 00:05:25.030 EAL: Trying to obtain current memory policy. 00:05:25.030 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.291 EAL: Restoring previous memory policy: 4 00:05:25.291 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.291 EAL: request: mp_malloc_sync 00:05:25.291 EAL: No shared files mode enabled, IPC is disabled 00:05:25.291 EAL: Heap on socket 0 was expanded by 258MB 00:05:25.291 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.291 EAL: request: mp_malloc_sync 00:05:25.291 EAL: No shared files mode enabled, IPC is disabled 00:05:25.291 EAL: Heap on socket 0 was shrunk by 258MB 00:05:25.291 EAL: Trying to obtain current memory policy. 
00:05:25.291 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.291 EAL: Restoring previous memory policy: 4 00:05:25.291 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.291 EAL: request: mp_malloc_sync 00:05:25.291 EAL: No shared files mode enabled, IPC is disabled 00:05:25.291 EAL: Heap on socket 0 was expanded by 514MB 00:05:25.291 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.291 EAL: request: mp_malloc_sync 00:05:25.291 EAL: No shared files mode enabled, IPC is disabled 00:05:25.291 EAL: Heap on socket 0 was shrunk by 514MB 00:05:25.291 EAL: Trying to obtain current memory policy. 00:05:25.291 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.551 EAL: Restoring previous memory policy: 4 00:05:25.551 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.551 EAL: request: mp_malloc_sync 00:05:25.551 EAL: No shared files mode enabled, IPC is disabled 00:05:25.551 EAL: Heap on socket 0 was expanded by 1026MB 00:05:25.551 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.812 EAL: request: mp_malloc_sync 00:05:25.812 EAL: No shared files mode enabled, IPC is disabled 00:05:25.812 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:25.812 passed 00:05:25.812 00:05:25.812 Run Summary: Type Total Ran Passed Failed Inactive 00:05:25.812 suites 1 1 n/a 0 0 00:05:25.812 tests 2 2 2 0 0 00:05:25.812 asserts 497 497 497 0 n/a 00:05:25.812 00:05:25.812 Elapsed time = 0.679 seconds 00:05:25.812 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.812 EAL: request: mp_malloc_sync 00:05:25.812 EAL: No shared files mode enabled, IPC is disabled 00:05:25.812 EAL: Heap on socket 0 was shrunk by 2MB 00:05:25.812 EAL: No shared files mode enabled, IPC is disabled 00:05:25.812 EAL: No shared files mode enabled, IPC is disabled 00:05:25.812 EAL: No shared files mode enabled, IPC is disabled 00:05:25.812 00:05:25.812 real 0m0.825s 00:05:25.812 user 0m0.435s 00:05:25.812 sys 0m0.359s 00:05:25.812 19:20:51 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:25.812 19:20:51 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:25.812 ************************************ 00:05:25.812 END TEST env_vtophys 00:05:25.812 ************************************ 00:05:25.812 19:20:51 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:25.812 19:20:51 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:25.812 19:20:51 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:25.812 19:20:51 env -- common/autotest_common.sh@10 -- # set +x 00:05:25.812 ************************************ 00:05:25.812 START TEST env_pci 00:05:25.812 ************************************ 00:05:25.812 19:20:51 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:25.812 00:05:25.812 00:05:25.812 CUnit - A unit testing framework for C - Version 2.1-3 00:05:25.812 http://cunit.sourceforge.net/ 00:05:25.812 00:05:25.812 00:05:25.812 Suite: pci 00:05:25.812 Test: pci_hook ...[2024-05-15 19:20:51.912501] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3361634 has claimed it 00:05:25.812 EAL: Cannot find device (10000:00:01.0) 00:05:25.812 EAL: Failed to attach device on primary process 00:05:25.812 passed 00:05:25.812 00:05:25.812 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:25.812 suites 1 1 n/a 0 0 00:05:25.812 tests 1 1 1 0 0 00:05:25.812 asserts 25 25 25 0 n/a 00:05:25.812 00:05:25.812 Elapsed time = 0.034 seconds 00:05:25.812 00:05:25.812 real 0m0.055s 00:05:25.812 user 0m0.016s 00:05:25.812 sys 0m0.039s 00:05:25.813 19:20:51 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:25.813 19:20:51 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:25.813 ************************************ 00:05:25.813 END TEST env_pci 00:05:25.813 ************************************ 00:05:25.813 19:20:51 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:25.813 19:20:51 env -- env/env.sh@15 -- # uname 00:05:25.813 19:20:51 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:25.813 19:20:51 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:25.813 19:20:51 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:25.813 19:20:51 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:05:25.813 19:20:51 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:25.813 19:20:51 env -- common/autotest_common.sh@10 -- # set +x 00:05:26.074 ************************************ 00:05:26.074 START TEST env_dpdk_post_init 00:05:26.074 ************************************ 00:05:26.074 19:20:52 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:26.074 EAL: Detected CPU lcores: 128 00:05:26.074 EAL: Detected NUMA nodes: 2 00:05:26.074 EAL: Detected shared linkage of DPDK 00:05:26.074 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:26.074 EAL: Selected IOVA mode 'VA' 00:05:26.074 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.074 EAL: VFIO support initialized 00:05:26.074 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:26.074 EAL: Using IOMMU type 1 (Type 1) 00:05:26.335 EAL: Ignore mapping IO port bar(1) 00:05:26.335 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:05:26.596 EAL: Ignore mapping IO port bar(1) 00:05:26.596 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:05:26.596 EAL: Ignore mapping IO port bar(1) 00:05:26.859 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:05:26.859 EAL: Ignore mapping IO port bar(1) 00:05:27.120 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:05:27.120 EAL: Ignore mapping IO port bar(1) 00:05:27.120 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:05:27.381 EAL: Ignore mapping IO port bar(1) 00:05:27.381 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:05:27.641 EAL: Ignore mapping IO port bar(1) 00:05:27.641 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:05:27.902 EAL: Ignore mapping IO port bar(1) 00:05:27.902 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:05:28.162 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:05:28.162 EAL: Ignore mapping IO port bar(1) 00:05:28.422 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:05:28.422 EAL: Ignore mapping IO port bar(1) 00:05:28.683 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 
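env.sh assembles the arguments for env_dpdk_post_init exactly as traced above: a single-core mask plus, on Linux only, a fixed --base-virtaddr so DPDK's memory maps land at a predictable virtual address. A condensed sketch of that fragment, assuming rootdir points at the spdk checkout as elsewhere in this log:

argv='-c 0x1 '
if [ "$(uname)" = Linux ]; then
    argv+=--base-virtaddr=0x200000000000      # pin DPDK mappings to a fixed virtual address
fi
run_test env_dpdk_post_init \
    "$rootdir/test/env/env_dpdk_post_init/env_dpdk_post_init" $argv   # $argv is left unquoted so it word-splits into flags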
00:05:28.683 EAL: Ignore mapping IO port bar(1) 00:05:28.942 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:05:28.942 EAL: Ignore mapping IO port bar(1) 00:05:28.942 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:29.203 EAL: Ignore mapping IO port bar(1) 00:05:29.203 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:29.464 EAL: Ignore mapping IO port bar(1) 00:05:29.464 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:29.725 EAL: Ignore mapping IO port bar(1) 00:05:29.725 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:29.725 EAL: Ignore mapping IO port bar(1) 00:05:29.985 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:29.985 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:29.985 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:29.985 Starting DPDK initialization... 00:05:29.985 Starting SPDK post initialization... 00:05:29.985 SPDK NVMe probe 00:05:29.985 Attaching to 0000:65:00.0 00:05:29.985 Attached to 0000:65:00.0 00:05:29.985 Cleaning up... 00:05:31.901 00:05:31.901 real 0m5.747s 00:05:31.901 user 0m0.199s 00:05:31.901 sys 0m0.100s 00:05:31.901 19:20:57 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:31.901 19:20:57 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:31.901 ************************************ 00:05:31.901 END TEST env_dpdk_post_init 00:05:31.901 ************************************ 00:05:31.901 19:20:57 env -- env/env.sh@26 -- # uname 00:05:31.901 19:20:57 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:31.901 19:20:57 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:31.901 19:20:57 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:31.901 19:20:57 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:31.901 19:20:57 env -- common/autotest_common.sh@10 -- # set +x 00:05:31.901 ************************************ 00:05:31.901 START TEST env_mem_callbacks 00:05:31.901 ************************************ 00:05:31.901 19:20:57 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:31.901 EAL: Detected CPU lcores: 128 00:05:31.901 EAL: Detected NUMA nodes: 2 00:05:31.901 EAL: Detected shared linkage of DPDK 00:05:31.901 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:31.901 EAL: Selected IOVA mode 'VA' 00:05:31.901 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.901 EAL: VFIO support initialized 00:05:31.901 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:31.901 00:05:31.901 00:05:31.901 CUnit - A unit testing framework for C - Version 2.1-3 00:05:31.901 http://cunit.sourceforge.net/ 00:05:31.901 00:05:31.901 00:05:31.901 Suite: memory 00:05:31.901 Test: test ... 
00:05:31.901 register 0x200000200000 2097152 00:05:31.901 malloc 3145728 00:05:31.901 register 0x200000400000 4194304 00:05:31.901 buf 0x200000500000 len 3145728 PASSED 00:05:31.901 malloc 64 00:05:31.901 buf 0x2000004fff40 len 64 PASSED 00:05:31.901 malloc 4194304 00:05:31.901 register 0x200000800000 6291456 00:05:31.901 buf 0x200000a00000 len 4194304 PASSED 00:05:31.901 free 0x200000500000 3145728 00:05:31.901 free 0x2000004fff40 64 00:05:31.901 unregister 0x200000400000 4194304 PASSED 00:05:31.901 free 0x200000a00000 4194304 00:05:31.901 unregister 0x200000800000 6291456 PASSED 00:05:31.901 malloc 8388608 00:05:31.901 register 0x200000400000 10485760 00:05:31.901 buf 0x200000600000 len 8388608 PASSED 00:05:31.901 free 0x200000600000 8388608 00:05:31.901 unregister 0x200000400000 10485760 PASSED 00:05:31.901 passed 00:05:31.901 00:05:31.901 Run Summary: Type Total Ran Passed Failed Inactive 00:05:31.901 suites 1 1 n/a 0 0 00:05:31.901 tests 1 1 1 0 0 00:05:31.901 asserts 15 15 15 0 n/a 00:05:31.901 00:05:31.901 Elapsed time = 0.010 seconds 00:05:31.901 00:05:31.901 real 0m0.073s 00:05:31.901 user 0m0.024s 00:05:31.901 sys 0m0.048s 00:05:31.901 19:20:57 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:31.901 19:20:57 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:31.901 ************************************ 00:05:31.901 END TEST env_mem_callbacks 00:05:31.901 ************************************ 00:05:31.901 00:05:31.901 real 0m7.425s 00:05:31.901 user 0m1.072s 00:05:31.901 sys 0m0.888s 00:05:31.901 19:20:57 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:31.901 19:20:57 env -- common/autotest_common.sh@10 -- # set +x 00:05:31.901 ************************************ 00:05:31.901 END TEST env 00:05:31.901 ************************************ 00:05:31.901 19:20:58 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:31.901 19:20:58 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:31.901 19:20:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:31.901 19:20:58 -- common/autotest_common.sh@10 -- # set +x 00:05:31.901 ************************************ 00:05:31.901 START TEST rpc 00:05:31.901 ************************************ 00:05:31.901 19:20:58 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:32.162 * Looking for test storage... 00:05:32.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:32.162 19:20:58 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:32.162 19:20:58 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3362929 00:05:32.162 19:20:58 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:32.162 19:20:58 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3362929 00:05:32.162 19:20:58 rpc -- common/autotest_common.sh@827 -- # '[' -z 3362929 ']' 00:05:32.162 19:20:58 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.162 19:20:58 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:32.162 19:20:58 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
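The rpc suite that starts here follows the sequence traced above: launch spdk_tgt with the bdev tracepoint group enabled, remember its pid for the cleanup trap, wait until the JSON-RPC socket /var/tmp/spdk.sock is listening, and then drive the target through the rpc_cmd wrapper. A simplified sketch of that flow; the polling loop is a crude stand-in for the harness's waitforlisten helper, and calling scripts/rpc.py directly is assumed to be equivalent to rpc_cmd here:

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

"$rootdir/build/bin/spdk_tgt" -e bdev &        # -e bdev: enable the bdev tracepoint group
spdk_pid=$!
trap 'kill $spdk_pid; exit 1' SIGINT SIGTERM EXIT   # the harness uses its killprocess helper instead of plain kill

while [ ! -S /var/tmp/spdk.sock ]; do          # wait for the RPC listener, as waitforlisten does
    sleep 0.1
done

"$rootdir/scripts/rpc.py" bdev_get_bdevs | jq length   # 0 until the test creates Malloc0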
00:05:32.163 19:20:58 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:32.163 19:20:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.163 [2024-05-15 19:20:58.220330] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:05:32.163 [2024-05-15 19:20:58.220387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3362929 ] 00:05:32.163 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.163 [2024-05-15 19:20:58.307254] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.423 [2024-05-15 19:20:58.399392] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:32.423 [2024-05-15 19:20:58.399451] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3362929' to capture a snapshot of events at runtime. 00:05:32.423 [2024-05-15 19:20:58.399461] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:32.423 [2024-05-15 19:20:58.399468] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:32.423 [2024-05-15 19:20:58.399475] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3362929 for offline analysis/debug. 00:05:32.423 [2024-05-15 19:20:58.399501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.994 19:20:59 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:32.994 19:20:59 rpc -- common/autotest_common.sh@860 -- # return 0 00:05:32.995 19:20:59 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:32.995 19:20:59 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:32.995 19:20:59 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:32.995 19:20:59 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:32.995 19:20:59 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:32.995 19:20:59 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:32.995 19:20:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.995 ************************************ 00:05:32.995 START TEST rpc_integrity 00:05:32.995 ************************************ 00:05:32.995 19:20:59 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:32.995 19:20:59 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:32.995 19:20:59 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.995 19:20:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.995 19:20:59 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.995 19:20:59 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:32.995 19:20:59 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:33.255 19:20:59 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:33.255 19:20:59 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:33.255 19:20:59 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.255 19:20:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.255 19:20:59 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.255 19:20:59 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:33.255 19:20:59 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:33.255 19:20:59 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.255 19:20:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.255 19:20:59 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.255 19:20:59 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:33.255 { 00:05:33.255 "name": "Malloc0", 00:05:33.255 "aliases": [ 00:05:33.255 "18ae5b03-a275-48b6-83f2-92a12cee5ac0" 00:05:33.255 ], 00:05:33.255 "product_name": "Malloc disk", 00:05:33.255 "block_size": 512, 00:05:33.255 "num_blocks": 16384, 00:05:33.255 "uuid": "18ae5b03-a275-48b6-83f2-92a12cee5ac0", 00:05:33.255 "assigned_rate_limits": { 00:05:33.255 "rw_ios_per_sec": 0, 00:05:33.255 "rw_mbytes_per_sec": 0, 00:05:33.255 "r_mbytes_per_sec": 0, 00:05:33.255 "w_mbytes_per_sec": 0 00:05:33.255 }, 00:05:33.255 "claimed": false, 00:05:33.255 "zoned": false, 00:05:33.255 "supported_io_types": { 00:05:33.256 "read": true, 00:05:33.256 "write": true, 00:05:33.256 "unmap": true, 00:05:33.256 "write_zeroes": true, 00:05:33.256 "flush": true, 00:05:33.256 "reset": true, 00:05:33.256 "compare": false, 00:05:33.256 "compare_and_write": false, 00:05:33.256 "abort": true, 00:05:33.256 "nvme_admin": false, 00:05:33.256 "nvme_io": false 00:05:33.256 }, 00:05:33.256 "memory_domains": [ 00:05:33.256 { 00:05:33.256 "dma_device_id": "system", 00:05:33.256 "dma_device_type": 1 00:05:33.256 }, 00:05:33.256 { 00:05:33.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.256 "dma_device_type": 2 00:05:33.256 } 00:05:33.256 ], 00:05:33.256 "driver_specific": {} 00:05:33.256 } 00:05:33.256 ]' 00:05:33.256 19:20:59 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:33.256 19:20:59 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:33.256 19:20:59 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:33.256 19:20:59 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.256 19:20:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.256 [2024-05-15 19:20:59.294607] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:33.256 [2024-05-15 19:20:59.294650] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:33.256 [2024-05-15 19:20:59.294665] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c7d8d0 00:05:33.256 [2024-05-15 19:20:59.294673] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:33.256 [2024-05-15 19:20:59.296150] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:33.256 [2024-05-15 19:20:59.296186] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:33.256 Passthru0 00:05:33.256 19:20:59 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.256 19:20:59 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:05:33.256 19:20:59 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.256 19:20:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.256 19:20:59 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.256 19:20:59 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:33.256 { 00:05:33.256 "name": "Malloc0", 00:05:33.256 "aliases": [ 00:05:33.256 "18ae5b03-a275-48b6-83f2-92a12cee5ac0" 00:05:33.256 ], 00:05:33.256 "product_name": "Malloc disk", 00:05:33.256 "block_size": 512, 00:05:33.256 "num_blocks": 16384, 00:05:33.256 "uuid": "18ae5b03-a275-48b6-83f2-92a12cee5ac0", 00:05:33.256 "assigned_rate_limits": { 00:05:33.256 "rw_ios_per_sec": 0, 00:05:33.256 "rw_mbytes_per_sec": 0, 00:05:33.256 "r_mbytes_per_sec": 0, 00:05:33.256 "w_mbytes_per_sec": 0 00:05:33.256 }, 00:05:33.256 "claimed": true, 00:05:33.256 "claim_type": "exclusive_write", 00:05:33.256 "zoned": false, 00:05:33.256 "supported_io_types": { 00:05:33.256 "read": true, 00:05:33.256 "write": true, 00:05:33.256 "unmap": true, 00:05:33.256 "write_zeroes": true, 00:05:33.256 "flush": true, 00:05:33.256 "reset": true, 00:05:33.256 "compare": false, 00:05:33.256 "compare_and_write": false, 00:05:33.256 "abort": true, 00:05:33.256 "nvme_admin": false, 00:05:33.256 "nvme_io": false 00:05:33.256 }, 00:05:33.256 "memory_domains": [ 00:05:33.256 { 00:05:33.256 "dma_device_id": "system", 00:05:33.256 "dma_device_type": 1 00:05:33.256 }, 00:05:33.256 { 00:05:33.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.256 "dma_device_type": 2 00:05:33.256 } 00:05:33.256 ], 00:05:33.256 "driver_specific": {} 00:05:33.256 }, 00:05:33.256 { 00:05:33.256 "name": "Passthru0", 00:05:33.256 "aliases": [ 00:05:33.256 "25317f18-7f47-5837-89fe-e00c3f0e3fdf" 00:05:33.256 ], 00:05:33.256 "product_name": "passthru", 00:05:33.256 "block_size": 512, 00:05:33.256 "num_blocks": 16384, 00:05:33.256 "uuid": "25317f18-7f47-5837-89fe-e00c3f0e3fdf", 00:05:33.256 "assigned_rate_limits": { 00:05:33.256 "rw_ios_per_sec": 0, 00:05:33.256 "rw_mbytes_per_sec": 0, 00:05:33.256 "r_mbytes_per_sec": 0, 00:05:33.256 "w_mbytes_per_sec": 0 00:05:33.256 }, 00:05:33.256 "claimed": false, 00:05:33.256 "zoned": false, 00:05:33.256 "supported_io_types": { 00:05:33.256 "read": true, 00:05:33.256 "write": true, 00:05:33.256 "unmap": true, 00:05:33.256 "write_zeroes": true, 00:05:33.256 "flush": true, 00:05:33.256 "reset": true, 00:05:33.256 "compare": false, 00:05:33.256 "compare_and_write": false, 00:05:33.256 "abort": true, 00:05:33.256 "nvme_admin": false, 00:05:33.256 "nvme_io": false 00:05:33.256 }, 00:05:33.256 "memory_domains": [ 00:05:33.256 { 00:05:33.256 "dma_device_id": "system", 00:05:33.256 "dma_device_type": 1 00:05:33.256 }, 00:05:33.256 { 00:05:33.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.256 "dma_device_type": 2 00:05:33.256 } 00:05:33.256 ], 00:05:33.256 "driver_specific": { 00:05:33.256 "passthru": { 00:05:33.256 "name": "Passthru0", 00:05:33.256 "base_bdev_name": "Malloc0" 00:05:33.256 } 00:05:33.256 } 00:05:33.256 } 00:05:33.256 ]' 00:05:33.256 19:20:59 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:33.256 19:20:59 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:33.256 19:20:59 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:33.256 19:20:59 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.256 19:20:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.256 
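The rpc_integrity sequence above exercises the bdev RPCs end to end: bdev_malloc_create 8 512 produces Malloc0 (16384 blocks of 512 bytes, i.e. 8 MiB), bdev_passthru_create layers Passthru0 on top of it, and the second bdev_get_bdevs dump shows Malloc0 now claimed with claim_type exclusive_write before both bdevs are torn down again. A hand-driven equivalent with rpc.py, reusing the names from this test, would look roughly like:

    ./scripts/rpc.py bdev_malloc_create 8 512                       # auto-named Malloc0
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0   # claims Malloc0
    ./scripts/rpc.py bdev_get_bdevs                                 # both bdevs listed, Malloc0 marked claimed
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0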
19:20:59 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.256 19:20:59 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:33.256 19:20:59 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.256 19:20:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.256 19:20:59 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.256 19:20:59 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:33.256 19:20:59 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.256 19:20:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.256 19:20:59 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.256 19:20:59 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:33.256 19:20:59 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:33.518 19:20:59 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:33.518 00:05:33.518 real 0m0.298s 00:05:33.518 user 0m0.188s 00:05:33.518 sys 0m0.041s 00:05:33.518 19:20:59 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:33.518 19:20:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.518 ************************************ 00:05:33.518 END TEST rpc_integrity 00:05:33.518 ************************************ 00:05:33.518 19:20:59 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:33.518 19:20:59 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:33.518 19:20:59 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:33.518 19:20:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.518 ************************************ 00:05:33.518 START TEST rpc_plugins 00:05:33.518 ************************************ 00:05:33.518 19:20:59 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:05:33.518 19:20:59 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:33.518 19:20:59 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.518 19:20:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:33.518 19:20:59 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.518 19:20:59 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:33.518 19:20:59 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:33.518 19:20:59 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.518 19:20:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:33.518 19:20:59 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.518 19:20:59 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:33.518 { 00:05:33.518 "name": "Malloc1", 00:05:33.518 "aliases": [ 00:05:33.518 "82c73102-5f83-48c3-a3ba-cb34bb4fcced" 00:05:33.518 ], 00:05:33.518 "product_name": "Malloc disk", 00:05:33.518 "block_size": 4096, 00:05:33.518 "num_blocks": 256, 00:05:33.518 "uuid": "82c73102-5f83-48c3-a3ba-cb34bb4fcced", 00:05:33.518 "assigned_rate_limits": { 00:05:33.518 "rw_ios_per_sec": 0, 00:05:33.518 "rw_mbytes_per_sec": 0, 00:05:33.518 "r_mbytes_per_sec": 0, 00:05:33.518 "w_mbytes_per_sec": 0 00:05:33.518 }, 00:05:33.518 "claimed": false, 00:05:33.518 "zoned": false, 00:05:33.518 "supported_io_types": { 00:05:33.518 "read": true, 00:05:33.518 "write": true, 00:05:33.518 "unmap": true, 00:05:33.518 "write_zeroes": true, 00:05:33.518 
"flush": true, 00:05:33.518 "reset": true, 00:05:33.518 "compare": false, 00:05:33.518 "compare_and_write": false, 00:05:33.518 "abort": true, 00:05:33.518 "nvme_admin": false, 00:05:33.518 "nvme_io": false 00:05:33.518 }, 00:05:33.518 "memory_domains": [ 00:05:33.518 { 00:05:33.518 "dma_device_id": "system", 00:05:33.518 "dma_device_type": 1 00:05:33.518 }, 00:05:33.518 { 00:05:33.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.518 "dma_device_type": 2 00:05:33.518 } 00:05:33.518 ], 00:05:33.518 "driver_specific": {} 00:05:33.518 } 00:05:33.518 ]' 00:05:33.518 19:20:59 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:33.518 19:20:59 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:33.518 19:20:59 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:33.518 19:20:59 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.518 19:20:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:33.518 19:20:59 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.518 19:20:59 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:33.518 19:20:59 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.518 19:20:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:33.518 19:20:59 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.518 19:20:59 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:33.518 19:20:59 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:33.518 19:20:59 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:33.518 00:05:33.518 real 0m0.155s 00:05:33.518 user 0m0.098s 00:05:33.518 sys 0m0.021s 00:05:33.518 19:20:59 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:33.518 19:20:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:33.518 ************************************ 00:05:33.518 END TEST rpc_plugins 00:05:33.518 ************************************ 00:05:33.780 19:20:59 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:33.780 19:20:59 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:33.780 19:20:59 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:33.780 19:20:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.780 ************************************ 00:05:33.780 START TEST rpc_trace_cmd_test 00:05:33.780 ************************************ 00:05:33.780 19:20:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:05:33.780 19:20:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:33.780 19:20:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:33.780 19:20:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.780 19:20:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:33.780 19:20:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.780 19:20:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:33.780 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3362929", 00:05:33.780 "tpoint_group_mask": "0x8", 00:05:33.780 "iscsi_conn": { 00:05:33.780 "mask": "0x2", 00:05:33.780 "tpoint_mask": "0x0" 00:05:33.780 }, 00:05:33.780 "scsi": { 00:05:33.780 "mask": "0x4", 00:05:33.780 "tpoint_mask": "0x0" 00:05:33.780 }, 00:05:33.780 "bdev": { 00:05:33.780 "mask": "0x8", 00:05:33.780 "tpoint_mask": 
"0xffffffffffffffff" 00:05:33.780 }, 00:05:33.780 "nvmf_rdma": { 00:05:33.780 "mask": "0x10", 00:05:33.780 "tpoint_mask": "0x0" 00:05:33.780 }, 00:05:33.780 "nvmf_tcp": { 00:05:33.780 "mask": "0x20", 00:05:33.780 "tpoint_mask": "0x0" 00:05:33.780 }, 00:05:33.780 "ftl": { 00:05:33.780 "mask": "0x40", 00:05:33.780 "tpoint_mask": "0x0" 00:05:33.780 }, 00:05:33.780 "blobfs": { 00:05:33.780 "mask": "0x80", 00:05:33.780 "tpoint_mask": "0x0" 00:05:33.780 }, 00:05:33.780 "dsa": { 00:05:33.780 "mask": "0x200", 00:05:33.780 "tpoint_mask": "0x0" 00:05:33.780 }, 00:05:33.780 "thread": { 00:05:33.780 "mask": "0x400", 00:05:33.780 "tpoint_mask": "0x0" 00:05:33.780 }, 00:05:33.780 "nvme_pcie": { 00:05:33.780 "mask": "0x800", 00:05:33.780 "tpoint_mask": "0x0" 00:05:33.780 }, 00:05:33.780 "iaa": { 00:05:33.780 "mask": "0x1000", 00:05:33.780 "tpoint_mask": "0x0" 00:05:33.780 }, 00:05:33.780 "nvme_tcp": { 00:05:33.780 "mask": "0x2000", 00:05:33.780 "tpoint_mask": "0x0" 00:05:33.780 }, 00:05:33.780 "bdev_nvme": { 00:05:33.780 "mask": "0x4000", 00:05:33.780 "tpoint_mask": "0x0" 00:05:33.780 }, 00:05:33.781 "sock": { 00:05:33.781 "mask": "0x8000", 00:05:33.781 "tpoint_mask": "0x0" 00:05:33.781 } 00:05:33.781 }' 00:05:33.781 19:20:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:33.781 19:20:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:33.781 19:20:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:33.781 19:20:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:33.781 19:20:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:33.781 19:20:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:33.781 19:20:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:34.042 19:20:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:34.042 19:20:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:34.042 19:21:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:34.042 00:05:34.042 real 0m0.257s 00:05:34.042 user 0m0.209s 00:05:34.042 sys 0m0.037s 00:05:34.042 19:21:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:34.042 19:21:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:34.042 ************************************ 00:05:34.042 END TEST rpc_trace_cmd_test 00:05:34.042 ************************************ 00:05:34.042 19:21:00 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:34.042 19:21:00 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:34.042 19:21:00 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:34.042 19:21:00 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:34.042 19:21:00 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:34.042 19:21:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.042 ************************************ 00:05:34.042 START TEST rpc_daemon_integrity 00:05:34.042 ************************************ 00:05:34.042 19:21:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:34.042 19:21:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:34.042 19:21:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.042 19:21:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.042 19:21:00 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.042 19:21:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:34.042 19:21:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:34.042 19:21:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:34.042 19:21:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:34.042 19:21:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.042 19:21:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.042 19:21:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.042 19:21:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:34.042 19:21:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:34.042 19:21:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.042 19:21:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.042 19:21:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.042 19:21:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:34.042 { 00:05:34.042 "name": "Malloc2", 00:05:34.042 "aliases": [ 00:05:34.042 "aeb113fa-8ef4-4c75-98a4-46cac8d90415" 00:05:34.042 ], 00:05:34.042 "product_name": "Malloc disk", 00:05:34.042 "block_size": 512, 00:05:34.042 "num_blocks": 16384, 00:05:34.042 "uuid": "aeb113fa-8ef4-4c75-98a4-46cac8d90415", 00:05:34.042 "assigned_rate_limits": { 00:05:34.042 "rw_ios_per_sec": 0, 00:05:34.042 "rw_mbytes_per_sec": 0, 00:05:34.042 "r_mbytes_per_sec": 0, 00:05:34.042 "w_mbytes_per_sec": 0 00:05:34.042 }, 00:05:34.042 "claimed": false, 00:05:34.042 "zoned": false, 00:05:34.042 "supported_io_types": { 00:05:34.042 "read": true, 00:05:34.042 "write": true, 00:05:34.042 "unmap": true, 00:05:34.042 "write_zeroes": true, 00:05:34.042 "flush": true, 00:05:34.042 "reset": true, 00:05:34.042 "compare": false, 00:05:34.042 "compare_and_write": false, 00:05:34.042 "abort": true, 00:05:34.042 "nvme_admin": false, 00:05:34.042 "nvme_io": false 00:05:34.042 }, 00:05:34.042 "memory_domains": [ 00:05:34.042 { 00:05:34.042 "dma_device_id": "system", 00:05:34.042 "dma_device_type": 1 00:05:34.042 }, 00:05:34.042 { 00:05:34.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:34.042 "dma_device_type": 2 00:05:34.042 } 00:05:34.042 ], 00:05:34.042 "driver_specific": {} 00:05:34.042 } 00:05:34.042 ]' 00:05:34.042 19:21:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:34.304 19:21:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:34.304 19:21:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:34.304 19:21:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.304 19:21:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.304 [2024-05-15 19:21:00.273282] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:34.304 [2024-05-15 19:21:00.273344] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:34.304 [2024-05-15 19:21:00.273360] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d3f870 00:05:34.304 [2024-05-15 19:21:00.273367] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:34.304 [2024-05-15 19:21:00.274786] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:34.304 [2024-05-15 19:21:00.274821] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:34.304 Passthru0 00:05:34.304 19:21:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.304 19:21:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:34.304 19:21:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.304 19:21:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.304 19:21:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.304 19:21:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:34.304 { 00:05:34.304 "name": "Malloc2", 00:05:34.304 "aliases": [ 00:05:34.304 "aeb113fa-8ef4-4c75-98a4-46cac8d90415" 00:05:34.304 ], 00:05:34.304 "product_name": "Malloc disk", 00:05:34.304 "block_size": 512, 00:05:34.304 "num_blocks": 16384, 00:05:34.304 "uuid": "aeb113fa-8ef4-4c75-98a4-46cac8d90415", 00:05:34.304 "assigned_rate_limits": { 00:05:34.304 "rw_ios_per_sec": 0, 00:05:34.304 "rw_mbytes_per_sec": 0, 00:05:34.304 "r_mbytes_per_sec": 0, 00:05:34.304 "w_mbytes_per_sec": 0 00:05:34.304 }, 00:05:34.304 "claimed": true, 00:05:34.304 "claim_type": "exclusive_write", 00:05:34.304 "zoned": false, 00:05:34.304 "supported_io_types": { 00:05:34.304 "read": true, 00:05:34.304 "write": true, 00:05:34.304 "unmap": true, 00:05:34.304 "write_zeroes": true, 00:05:34.304 "flush": true, 00:05:34.304 "reset": true, 00:05:34.304 "compare": false, 00:05:34.304 "compare_and_write": false, 00:05:34.304 "abort": true, 00:05:34.304 "nvme_admin": false, 00:05:34.304 "nvme_io": false 00:05:34.304 }, 00:05:34.304 "memory_domains": [ 00:05:34.304 { 00:05:34.304 "dma_device_id": "system", 00:05:34.304 "dma_device_type": 1 00:05:34.304 }, 00:05:34.304 { 00:05:34.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:34.304 "dma_device_type": 2 00:05:34.304 } 00:05:34.304 ], 00:05:34.304 "driver_specific": {} 00:05:34.304 }, 00:05:34.304 { 00:05:34.304 "name": "Passthru0", 00:05:34.304 "aliases": [ 00:05:34.304 "d08fd133-656d-5526-a38e-a24bbd750d06" 00:05:34.304 ], 00:05:34.304 "product_name": "passthru", 00:05:34.304 "block_size": 512, 00:05:34.304 "num_blocks": 16384, 00:05:34.304 "uuid": "d08fd133-656d-5526-a38e-a24bbd750d06", 00:05:34.304 "assigned_rate_limits": { 00:05:34.304 "rw_ios_per_sec": 0, 00:05:34.304 "rw_mbytes_per_sec": 0, 00:05:34.304 "r_mbytes_per_sec": 0, 00:05:34.304 "w_mbytes_per_sec": 0 00:05:34.304 }, 00:05:34.304 "claimed": false, 00:05:34.304 "zoned": false, 00:05:34.304 "supported_io_types": { 00:05:34.304 "read": true, 00:05:34.304 "write": true, 00:05:34.304 "unmap": true, 00:05:34.304 "write_zeroes": true, 00:05:34.304 "flush": true, 00:05:34.304 "reset": true, 00:05:34.304 "compare": false, 00:05:34.304 "compare_and_write": false, 00:05:34.304 "abort": true, 00:05:34.304 "nvme_admin": false, 00:05:34.304 "nvme_io": false 00:05:34.304 }, 00:05:34.304 "memory_domains": [ 00:05:34.304 { 00:05:34.304 "dma_device_id": "system", 00:05:34.304 "dma_device_type": 1 00:05:34.304 }, 00:05:34.304 { 00:05:34.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:34.304 "dma_device_type": 2 00:05:34.304 } 00:05:34.304 ], 00:05:34.304 "driver_specific": { 00:05:34.304 "passthru": { 00:05:34.304 "name": "Passthru0", 00:05:34.304 "base_bdev_name": "Malloc2" 00:05:34.304 } 00:05:34.304 } 00:05:34.304 } 00:05:34.304 ]' 00:05:34.304 19:21:00 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:34.304 19:21:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:34.304 19:21:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:34.304 19:21:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.304 19:21:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.304 19:21:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.304 19:21:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:34.304 19:21:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.304 19:21:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.304 19:21:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.304 19:21:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:34.304 19:21:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.304 19:21:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.304 19:21:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.304 19:21:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:34.304 19:21:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:34.304 19:21:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:34.304 00:05:34.304 real 0m0.302s 00:05:34.304 user 0m0.191s 00:05:34.304 sys 0m0.042s 00:05:34.304 19:21:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:34.304 19:21:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.304 ************************************ 00:05:34.304 END TEST rpc_daemon_integrity 00:05:34.304 ************************************ 00:05:34.304 19:21:00 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:34.304 19:21:00 rpc -- rpc/rpc.sh@84 -- # killprocess 3362929 00:05:34.304 19:21:00 rpc -- common/autotest_common.sh@946 -- # '[' -z 3362929 ']' 00:05:34.304 19:21:00 rpc -- common/autotest_common.sh@950 -- # kill -0 3362929 00:05:34.304 19:21:00 rpc -- common/autotest_common.sh@951 -- # uname 00:05:34.304 19:21:00 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:34.304 19:21:00 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3362929 00:05:34.564 19:21:00 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:34.564 19:21:00 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:34.564 19:21:00 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3362929' 00:05:34.564 killing process with pid 3362929 00:05:34.564 19:21:00 rpc -- common/autotest_common.sh@965 -- # kill 3362929 00:05:34.564 19:21:00 rpc -- common/autotest_common.sh@970 -- # wait 3362929 00:05:34.825 00:05:34.825 real 0m2.698s 00:05:34.825 user 0m3.511s 00:05:34.825 sys 0m0.817s 00:05:34.825 19:21:00 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:34.825 19:21:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.825 ************************************ 00:05:34.825 END TEST rpc 00:05:34.825 ************************************ 00:05:34.825 19:21:00 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:34.825 19:21:00 
-- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:34.825 19:21:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:34.825 19:21:00 -- common/autotest_common.sh@10 -- # set +x 00:05:34.825 ************************************ 00:05:34.825 START TEST skip_rpc 00:05:34.825 ************************************ 00:05:34.825 19:21:00 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:34.825 * Looking for test storage... 00:05:34.825 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:34.825 19:21:00 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:34.825 19:21:00 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:34.825 19:21:00 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:34.825 19:21:00 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:34.825 19:21:00 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:34.825 19:21:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.825 ************************************ 00:05:34.825 START TEST skip_rpc 00:05:34.825 ************************************ 00:05:34.825 19:21:00 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:05:34.825 19:21:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3363668 00:05:34.825 19:21:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:34.825 19:21:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.825 19:21:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:35.086 [2024-05-15 19:21:01.028362] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
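A side note before the skip_rpc output continues: because the rpc suite's target was started with -e bdev, trace_get_info above reported the bdev group mask as 0xffffffffffffffff and a trace shared-memory file at /dev/shm/spdk_tgt_trace.pid3362929. As the target's own startup notice suggests, that data could be inspected with the spdk_trace tool while the target is alive (binary path assumed to sit next to the other tools under build/bin):

    ./build/bin/spdk_trace -s spdk_tgt -p 3362929   # dump the captured bdev tracepoints for that pid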
00:05:35.086 [2024-05-15 19:21:01.028422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3363668 ] 00:05:35.086 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.086 [2024-05-15 19:21:01.104966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.086 [2024-05-15 19:21:01.198733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.371 19:21:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:40.371 19:21:06 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:40.371 19:21:06 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:40.371 19:21:06 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:40.371 19:21:06 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:40.371 19:21:06 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:40.371 19:21:06 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:40.371 19:21:06 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:40.371 19:21:06 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.371 19:21:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.371 19:21:06 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:40.371 19:21:06 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:40.371 19:21:06 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:40.371 19:21:06 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:40.371 19:21:06 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:40.371 19:21:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:40.371 19:21:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3363668 00:05:40.371 19:21:06 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 3363668 ']' 00:05:40.371 19:21:06 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 3363668 00:05:40.371 19:21:06 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:05:40.371 19:21:06 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:40.371 19:21:06 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3363668 00:05:40.371 19:21:06 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:40.371 19:21:06 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:40.371 19:21:06 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3363668' 00:05:40.371 killing process with pid 3363668 00:05:40.371 19:21:06 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 3363668 00:05:40.371 19:21:06 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 3363668 00:05:40.371 00:05:40.371 real 0m5.278s 00:05:40.371 user 0m5.056s 00:05:40.371 sys 0m0.252s 00:05:40.371 19:21:06 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:40.371 19:21:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.371 ************************************ 00:05:40.371 END TEST skip_rpc 
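The skip_rpc case that just finished is a negative test: the target was started with --no-rpc-server, so the NOT wrapper around rpc_cmd spdk_get_version passes only because the RPC call fails (es=1 above), and the target is then killed. A minimal manual version of the same check, sketched:

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    ./scripts/rpc.py spdk_get_version && echo 'unexpected: RPC answered' || echo 'expected: no RPC server'
    kill %1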
00:05:40.371 ************************************ 00:05:40.371 19:21:06 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:40.371 19:21:06 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:40.372 19:21:06 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:40.372 19:21:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.372 ************************************ 00:05:40.372 START TEST skip_rpc_with_json 00:05:40.372 ************************************ 00:05:40.372 19:21:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:05:40.372 19:21:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:40.372 19:21:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3364754 00:05:40.372 19:21:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.372 19:21:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3364754 00:05:40.372 19:21:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:40.372 19:21:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 3364754 ']' 00:05:40.372 19:21:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.372 19:21:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:40.372 19:21:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.372 19:21:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:40.372 19:21:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:40.372 [2024-05-15 19:21:06.411569] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
00:05:40.372 [2024-05-15 19:21:06.411620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3364754 ] 00:05:40.372 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.372 [2024-05-15 19:21:06.495685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.631 [2024-05-15 19:21:06.563897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.202 19:21:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:41.202 19:21:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:05:41.202 19:21:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:41.202 19:21:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.202 19:21:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:41.202 [2024-05-15 19:21:07.274418] nvmf_rpc.c:2547:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:41.202 request: 00:05:41.202 { 00:05:41.202 "trtype": "tcp", 00:05:41.202 "method": "nvmf_get_transports", 00:05:41.202 "req_id": 1 00:05:41.202 } 00:05:41.202 Got JSON-RPC error response 00:05:41.202 response: 00:05:41.202 { 00:05:41.202 "code": -19, 00:05:41.202 "message": "No such device" 00:05:41.202 } 00:05:41.202 19:21:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:41.202 19:21:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:41.202 19:21:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.202 19:21:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:41.202 [2024-05-15 19:21:07.286533] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:41.202 19:21:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.202 19:21:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:41.202 19:21:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.202 19:21:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:41.462 19:21:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.462 19:21:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:41.462 { 00:05:41.462 "subsystems": [ 00:05:41.462 { 00:05:41.462 "subsystem": "vfio_user_target", 00:05:41.462 "config": null 00:05:41.462 }, 00:05:41.462 { 00:05:41.462 "subsystem": "keyring", 00:05:41.462 "config": [] 00:05:41.462 }, 00:05:41.462 { 00:05:41.462 "subsystem": "iobuf", 00:05:41.462 "config": [ 00:05:41.462 { 00:05:41.462 "method": "iobuf_set_options", 00:05:41.462 "params": { 00:05:41.462 "small_pool_count": 8192, 00:05:41.462 "large_pool_count": 1024, 00:05:41.462 "small_bufsize": 8192, 00:05:41.462 "large_bufsize": 135168 00:05:41.462 } 00:05:41.462 } 00:05:41.462 ] 00:05:41.462 }, 00:05:41.462 { 00:05:41.462 "subsystem": "sock", 00:05:41.462 "config": [ 00:05:41.462 { 00:05:41.462 "method": "sock_impl_set_options", 00:05:41.462 "params": { 00:05:41.462 "impl_name": "posix", 00:05:41.462 "recv_buf_size": 2097152, 00:05:41.462 "send_buf_size": 2097152, 
00:05:41.462 "enable_recv_pipe": true, 00:05:41.462 "enable_quickack": false, 00:05:41.462 "enable_placement_id": 0, 00:05:41.462 "enable_zerocopy_send_server": true, 00:05:41.462 "enable_zerocopy_send_client": false, 00:05:41.462 "zerocopy_threshold": 0, 00:05:41.462 "tls_version": 0, 00:05:41.462 "enable_ktls": false 00:05:41.462 } 00:05:41.462 }, 00:05:41.462 { 00:05:41.462 "method": "sock_impl_set_options", 00:05:41.462 "params": { 00:05:41.462 "impl_name": "ssl", 00:05:41.462 "recv_buf_size": 4096, 00:05:41.462 "send_buf_size": 4096, 00:05:41.462 "enable_recv_pipe": true, 00:05:41.462 "enable_quickack": false, 00:05:41.462 "enable_placement_id": 0, 00:05:41.462 "enable_zerocopy_send_server": true, 00:05:41.462 "enable_zerocopy_send_client": false, 00:05:41.462 "zerocopy_threshold": 0, 00:05:41.462 "tls_version": 0, 00:05:41.462 "enable_ktls": false 00:05:41.462 } 00:05:41.462 } 00:05:41.462 ] 00:05:41.462 }, 00:05:41.462 { 00:05:41.462 "subsystem": "vmd", 00:05:41.462 "config": [] 00:05:41.462 }, 00:05:41.462 { 00:05:41.462 "subsystem": "accel", 00:05:41.462 "config": [ 00:05:41.462 { 00:05:41.462 "method": "accel_set_options", 00:05:41.462 "params": { 00:05:41.462 "small_cache_size": 128, 00:05:41.462 "large_cache_size": 16, 00:05:41.462 "task_count": 2048, 00:05:41.462 "sequence_count": 2048, 00:05:41.462 "buf_count": 2048 00:05:41.462 } 00:05:41.462 } 00:05:41.462 ] 00:05:41.462 }, 00:05:41.462 { 00:05:41.462 "subsystem": "bdev", 00:05:41.462 "config": [ 00:05:41.462 { 00:05:41.462 "method": "bdev_set_options", 00:05:41.462 "params": { 00:05:41.462 "bdev_io_pool_size": 65535, 00:05:41.462 "bdev_io_cache_size": 256, 00:05:41.462 "bdev_auto_examine": true, 00:05:41.462 "iobuf_small_cache_size": 128, 00:05:41.462 "iobuf_large_cache_size": 16 00:05:41.462 } 00:05:41.462 }, 00:05:41.462 { 00:05:41.462 "method": "bdev_raid_set_options", 00:05:41.462 "params": { 00:05:41.463 "process_window_size_kb": 1024 00:05:41.463 } 00:05:41.463 }, 00:05:41.463 { 00:05:41.463 "method": "bdev_iscsi_set_options", 00:05:41.463 "params": { 00:05:41.463 "timeout_sec": 30 00:05:41.463 } 00:05:41.463 }, 00:05:41.463 { 00:05:41.463 "method": "bdev_nvme_set_options", 00:05:41.463 "params": { 00:05:41.463 "action_on_timeout": "none", 00:05:41.463 "timeout_us": 0, 00:05:41.463 "timeout_admin_us": 0, 00:05:41.463 "keep_alive_timeout_ms": 10000, 00:05:41.463 "arbitration_burst": 0, 00:05:41.463 "low_priority_weight": 0, 00:05:41.463 "medium_priority_weight": 0, 00:05:41.463 "high_priority_weight": 0, 00:05:41.463 "nvme_adminq_poll_period_us": 10000, 00:05:41.463 "nvme_ioq_poll_period_us": 0, 00:05:41.463 "io_queue_requests": 0, 00:05:41.463 "delay_cmd_submit": true, 00:05:41.463 "transport_retry_count": 4, 00:05:41.463 "bdev_retry_count": 3, 00:05:41.463 "transport_ack_timeout": 0, 00:05:41.463 "ctrlr_loss_timeout_sec": 0, 00:05:41.463 "reconnect_delay_sec": 0, 00:05:41.463 "fast_io_fail_timeout_sec": 0, 00:05:41.463 "disable_auto_failback": false, 00:05:41.463 "generate_uuids": false, 00:05:41.463 "transport_tos": 0, 00:05:41.463 "nvme_error_stat": false, 00:05:41.463 "rdma_srq_size": 0, 00:05:41.463 "io_path_stat": false, 00:05:41.463 "allow_accel_sequence": false, 00:05:41.463 "rdma_max_cq_size": 0, 00:05:41.463 "rdma_cm_event_timeout_ms": 0, 00:05:41.463 "dhchap_digests": [ 00:05:41.463 "sha256", 00:05:41.463 "sha384", 00:05:41.463 "sha512" 00:05:41.463 ], 00:05:41.463 "dhchap_dhgroups": [ 00:05:41.463 "null", 00:05:41.463 "ffdhe2048", 00:05:41.463 "ffdhe3072", 00:05:41.463 "ffdhe4096", 00:05:41.463 
"ffdhe6144", 00:05:41.463 "ffdhe8192" 00:05:41.463 ] 00:05:41.463 } 00:05:41.463 }, 00:05:41.463 { 00:05:41.463 "method": "bdev_nvme_set_hotplug", 00:05:41.463 "params": { 00:05:41.463 "period_us": 100000, 00:05:41.463 "enable": false 00:05:41.463 } 00:05:41.463 }, 00:05:41.463 { 00:05:41.463 "method": "bdev_wait_for_examine" 00:05:41.463 } 00:05:41.463 ] 00:05:41.463 }, 00:05:41.463 { 00:05:41.463 "subsystem": "scsi", 00:05:41.463 "config": null 00:05:41.463 }, 00:05:41.463 { 00:05:41.463 "subsystem": "scheduler", 00:05:41.463 "config": [ 00:05:41.463 { 00:05:41.463 "method": "framework_set_scheduler", 00:05:41.463 "params": { 00:05:41.463 "name": "static" 00:05:41.463 } 00:05:41.463 } 00:05:41.463 ] 00:05:41.463 }, 00:05:41.463 { 00:05:41.463 "subsystem": "vhost_scsi", 00:05:41.463 "config": [] 00:05:41.463 }, 00:05:41.463 { 00:05:41.463 "subsystem": "vhost_blk", 00:05:41.463 "config": [] 00:05:41.463 }, 00:05:41.463 { 00:05:41.463 "subsystem": "ublk", 00:05:41.463 "config": [] 00:05:41.463 }, 00:05:41.463 { 00:05:41.463 "subsystem": "nbd", 00:05:41.463 "config": [] 00:05:41.463 }, 00:05:41.463 { 00:05:41.463 "subsystem": "nvmf", 00:05:41.463 "config": [ 00:05:41.463 { 00:05:41.463 "method": "nvmf_set_config", 00:05:41.463 "params": { 00:05:41.463 "discovery_filter": "match_any", 00:05:41.463 "admin_cmd_passthru": { 00:05:41.463 "identify_ctrlr": false 00:05:41.463 } 00:05:41.463 } 00:05:41.463 }, 00:05:41.463 { 00:05:41.463 "method": "nvmf_set_max_subsystems", 00:05:41.463 "params": { 00:05:41.463 "max_subsystems": 1024 00:05:41.463 } 00:05:41.463 }, 00:05:41.463 { 00:05:41.463 "method": "nvmf_set_crdt", 00:05:41.463 "params": { 00:05:41.463 "crdt1": 0, 00:05:41.463 "crdt2": 0, 00:05:41.463 "crdt3": 0 00:05:41.463 } 00:05:41.463 }, 00:05:41.463 { 00:05:41.463 "method": "nvmf_create_transport", 00:05:41.463 "params": { 00:05:41.463 "trtype": "TCP", 00:05:41.463 "max_queue_depth": 128, 00:05:41.463 "max_io_qpairs_per_ctrlr": 127, 00:05:41.463 "in_capsule_data_size": 4096, 00:05:41.463 "max_io_size": 131072, 00:05:41.463 "io_unit_size": 131072, 00:05:41.463 "max_aq_depth": 128, 00:05:41.463 "num_shared_buffers": 511, 00:05:41.463 "buf_cache_size": 4294967295, 00:05:41.463 "dif_insert_or_strip": false, 00:05:41.463 "zcopy": false, 00:05:41.463 "c2h_success": true, 00:05:41.463 "sock_priority": 0, 00:05:41.463 "abort_timeout_sec": 1, 00:05:41.463 "ack_timeout": 0, 00:05:41.463 "data_wr_pool_size": 0 00:05:41.463 } 00:05:41.463 } 00:05:41.463 ] 00:05:41.463 }, 00:05:41.463 { 00:05:41.463 "subsystem": "iscsi", 00:05:41.463 "config": [ 00:05:41.463 { 00:05:41.463 "method": "iscsi_set_options", 00:05:41.463 "params": { 00:05:41.463 "node_base": "iqn.2016-06.io.spdk", 00:05:41.463 "max_sessions": 128, 00:05:41.463 "max_connections_per_session": 2, 00:05:41.463 "max_queue_depth": 64, 00:05:41.463 "default_time2wait": 2, 00:05:41.463 "default_time2retain": 20, 00:05:41.463 "first_burst_length": 8192, 00:05:41.463 "immediate_data": true, 00:05:41.463 "allow_duplicated_isid": false, 00:05:41.463 "error_recovery_level": 0, 00:05:41.463 "nop_timeout": 60, 00:05:41.463 "nop_in_interval": 30, 00:05:41.463 "disable_chap": false, 00:05:41.463 "require_chap": false, 00:05:41.463 "mutual_chap": false, 00:05:41.463 "chap_group": 0, 00:05:41.463 "max_large_datain_per_connection": 64, 00:05:41.463 "max_r2t_per_connection": 4, 00:05:41.463 "pdu_pool_size": 36864, 00:05:41.463 "immediate_data_pool_size": 16384, 00:05:41.463 "data_out_pool_size": 2048 00:05:41.463 } 00:05:41.463 } 00:05:41.463 ] 00:05:41.463 } 
00:05:41.463 ] 00:05:41.463 } 00:05:41.463 19:21:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:41.463 19:21:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3364754 00:05:41.463 19:21:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3364754 ']' 00:05:41.463 19:21:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3364754 00:05:41.463 19:21:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:41.463 19:21:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:41.463 19:21:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3364754 00:05:41.463 19:21:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:41.463 19:21:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:41.463 19:21:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3364754' 00:05:41.463 killing process with pid 3364754 00:05:41.463 19:21:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3364754 00:05:41.463 19:21:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3364754 00:05:41.724 19:21:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3365296 00:05:41.724 19:21:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:41.724 19:21:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:47.007 19:21:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3365296 00:05:47.007 19:21:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3365296 ']' 00:05:47.007 19:21:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3365296 00:05:47.007 19:21:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:47.007 19:21:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:47.007 19:21:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3365296 00:05:47.007 19:21:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:47.007 19:21:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:47.007 19:21:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3365296' 00:05:47.007 killing process with pid 3365296 00:05:47.007 19:21:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3365296 00:05:47.007 19:21:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3365296 00:05:47.007 19:21:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:47.007 19:21:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:47.007 00:05:47.007 real 0m6.641s 00:05:47.007 user 0m6.553s 00:05:47.007 sys 0m0.587s 00:05:47.007 19:21:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 
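The skip_rpc_with_json flow above first provokes a 'No such device' error from nvmf_get_transports, creates the TCP transport over RPC, saves the full runtime configuration with save_config into config.json (the large JSON dump above), and then boots a second target from that file with --json; the grep for 'TCP Transport Init' just below confirms the transport was recreated purely from the saved config. The same save-and-replay pattern, sketched with an illustrative file name:

    ./scripts/rpc.py nvmf_create_transport -t tcp          # runtime state worth persisting
    ./scripts/rpc.py save_config > /tmp/spdk_config.json
    ./build/bin/spdk_tgt --json /tmp/spdk_config.json      # replays the saved subsystems at startup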
00:05:47.007 19:21:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:47.007 ************************************ 00:05:47.007 END TEST skip_rpc_with_json 00:05:47.007 ************************************ 00:05:47.007 19:21:13 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:47.007 19:21:13 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:47.007 19:21:13 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:47.007 19:21:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.007 ************************************ 00:05:47.007 START TEST skip_rpc_with_delay 00:05:47.007 ************************************ 00:05:47.007 19:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:05:47.007 19:21:13 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:47.007 19:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:47.007 19:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:47.007 19:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:47.007 19:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.007 19:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:47.007 19:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.007 19:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:47.007 19:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.007 19:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:47.007 19:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:47.007 19:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:47.007 [2024-05-15 19:21:13.139730] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
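The error that skip_rpc_with_delay just triggered is the expected result: --wait-for-rpc pauses the application before subsystem initialization so that it can be resumed over RPC, which is incompatible with --no-rpc-server, and spdk_app_start rejects the combination. The normal pairing looks roughly like the following (framework_start_init is the usual resume RPC in this SPDK generation, used here on that assumption):

    ./build/bin/spdk_tgt --wait-for-rpc &
    # ...issue any pre-init RPCs here, then resume subsystem initialization:
    ./scripts/rpc.py framework_start_init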
00:05:47.007 [2024-05-15 19:21:13.139834] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:47.007 19:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:47.007 19:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:47.007 19:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:47.007 19:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:47.007 00:05:47.007 real 0m0.075s 00:05:47.007 user 0m0.040s 00:05:47.007 sys 0m0.034s 00:05:47.007 19:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:47.007 19:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:47.007 ************************************ 00:05:47.007 END TEST skip_rpc_with_delay 00:05:47.007 ************************************ 00:05:47.267 19:21:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:47.267 19:21:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:47.267 19:21:13 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:47.267 19:21:13 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:47.267 19:21:13 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:47.267 19:21:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.267 ************************************ 00:05:47.267 START TEST exit_on_failed_rpc_init 00:05:47.267 ************************************ 00:05:47.267 19:21:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:05:47.267 19:21:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3366705 00:05:47.268 19:21:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3366705 00:05:47.268 19:21:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 3366705 ']' 00:05:47.268 19:21:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:47.268 19:21:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.268 19:21:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:47.268 19:21:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.268 19:21:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:47.268 19:21:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:47.268 [2024-05-15 19:21:13.310456] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
00:05:47.268 [2024-05-15 19:21:13.310512] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3366705 ] 00:05:47.268 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.268 [2024-05-15 19:21:13.396716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.528 [2024-05-15 19:21:13.468469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.098 19:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:48.098 19:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:05:48.098 19:21:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.098 19:21:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:48.098 19:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:48.098 19:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:48.098 19:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:48.098 19:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:48.098 19:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:48.098 19:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:48.099 19:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:48.099 19:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:48.099 19:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:48.099 19:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:48.099 19:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:48.099 [2024-05-15 19:21:14.237834] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:05:48.099 [2024-05-15 19:21:14.237885] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3366955 ] 00:05:48.099 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.360 [2024-05-15 19:21:14.301346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.360 [2024-05-15 19:21:14.365234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.360 [2024-05-15 19:21:14.365294] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:48.360 [2024-05-15 19:21:14.365304] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:48.360 [2024-05-15 19:21:14.365311] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:48.360 19:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:48.360 19:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:48.360 19:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:48.360 19:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:48.360 19:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:48.360 19:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:48.360 19:21:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:48.360 19:21:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3366705 00:05:48.360 19:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 3366705 ']' 00:05:48.360 19:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 3366705 00:05:48.360 19:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:05:48.360 19:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:48.360 19:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3366705 00:05:48.360 19:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:48.360 19:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:48.360 19:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3366705' 00:05:48.360 killing process with pid 3366705 00:05:48.360 19:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 3366705 00:05:48.360 19:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 3366705 00:05:48.620 00:05:48.620 real 0m1.436s 00:05:48.620 user 0m1.722s 00:05:48.620 sys 0m0.400s 00:05:48.620 19:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:48.620 19:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:48.621 ************************************ 00:05:48.621 END TEST exit_on_failed_rpc_init 00:05:48.621 ************************************ 00:05:48.621 19:21:14 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:48.621 00:05:48.621 real 0m13.876s 00:05:48.621 user 0m13.532s 00:05:48.621 sys 0m1.564s 00:05:48.621 19:21:14 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:48.621 19:21:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.621 ************************************ 00:05:48.621 END TEST skip_rpc 00:05:48.621 ************************************ 00:05:48.621 19:21:14 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:48.621 19:21:14 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:48.621 19:21:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:48.621 19:21:14 -- 
common/autotest_common.sh@10 -- # set +x 00:05:48.881 ************************************ 00:05:48.881 START TEST rpc_client 00:05:48.881 ************************************ 00:05:48.881 19:21:14 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:48.881 * Looking for test storage... 00:05:48.881 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:48.881 19:21:14 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:48.881 OK 00:05:48.881 19:21:14 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:48.881 00:05:48.881 real 0m0.136s 00:05:48.881 user 0m0.058s 00:05:48.881 sys 0m0.087s 00:05:48.881 19:21:14 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:48.881 19:21:14 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:48.881 ************************************ 00:05:48.881 END TEST rpc_client 00:05:48.881 ************************************ 00:05:48.881 19:21:14 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:48.881 19:21:14 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:48.881 19:21:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:48.881 19:21:14 -- common/autotest_common.sh@10 -- # set +x 00:05:48.881 ************************************ 00:05:48.881 START TEST json_config 00:05:48.881 ************************************ 00:05:48.881 19:21:15 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:49.142 19:21:15 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:49.142 19:21:15 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:49.142 19:21:15 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:49.142 19:21:15 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:49.142 19:21:15 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:49.142 19:21:15 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:49.142 19:21:15 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:49.142 19:21:15 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:49.142 19:21:15 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:49.142 19:21:15 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:49.142 19:21:15 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:49.142 19:21:15 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:49.142 19:21:15 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:49.142 19:21:15 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:49.142 19:21:15 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:49.142 19:21:15 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:49.143 19:21:15 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:49.143 19:21:15 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:49.143 19:21:15 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:49.143 19:21:15 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:49.143 19:21:15 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:49.143 19:21:15 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:49.143 19:21:15 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.143 19:21:15 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.143 19:21:15 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.143 19:21:15 json_config -- paths/export.sh@5 -- # export PATH 00:05:49.143 19:21:15 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.143 19:21:15 json_config -- nvmf/common.sh@47 -- # : 0 00:05:49.143 19:21:15 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:49.143 19:21:15 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:49.143 19:21:15 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:49.143 19:21:15 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:49.143 19:21:15 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:49.143 19:21:15 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:49.143 19:21:15 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:49.143 19:21:15 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:49.143 19:21:15 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:49.143 19:21:15 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:49.143 19:21:15 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:49.143 19:21:15 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:49.143 19:21:15 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:49.143 19:21:15 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:49.143 19:21:15 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:49.143 19:21:15 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:49.143 19:21:15 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:49.143 19:21:15 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:49.143 19:21:15 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:49.143 19:21:15 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:49.143 19:21:15 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:49.143 19:21:15 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:49.143 19:21:15 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:49.143 19:21:15 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:49.143 INFO: JSON configuration test init 00:05:49.143 19:21:15 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:49.143 19:21:15 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:49.143 19:21:15 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:49.143 19:21:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.143 19:21:15 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:49.143 19:21:15 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:49.143 19:21:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.143 19:21:15 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:49.143 19:21:15 json_config -- json_config/common.sh@9 -- # local app=target 00:05:49.143 19:21:15 json_config -- json_config/common.sh@10 -- # shift 00:05:49.143 19:21:15 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:49.143 19:21:15 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:49.143 19:21:15 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:49.143 19:21:15 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:49.143 19:21:15 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:49.143 19:21:15 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3367245 00:05:49.143 19:21:15 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:49.143 Waiting for target to run... 
00:05:49.143 19:21:15 json_config -- json_config/common.sh@25 -- # waitforlisten 3367245 /var/tmp/spdk_tgt.sock 00:05:49.143 19:21:15 json_config -- common/autotest_common.sh@827 -- # '[' -z 3367245 ']' 00:05:49.143 19:21:15 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:49.143 19:21:15 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:49.143 19:21:15 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:49.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:49.143 19:21:15 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:49.143 19:21:15 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:49.143 19:21:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.143 [2024-05-15 19:21:15.219900] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:05:49.143 [2024-05-15 19:21:15.219968] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3367245 ] 00:05:49.143 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.714 [2024-05-15 19:21:15.661142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.714 [2024-05-15 19:21:15.719590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.975 19:21:16 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:49.975 19:21:16 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:49.975 19:21:16 json_config -- json_config/common.sh@26 -- # echo '' 00:05:49.975 00:05:49.975 19:21:16 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:49.975 19:21:16 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:49.975 19:21:16 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:49.975 19:21:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.975 19:21:16 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:49.975 19:21:16 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:49.975 19:21:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:49.975 19:21:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.975 19:21:16 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:49.975 19:21:16 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:49.975 19:21:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:50.545 19:21:16 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:50.545 19:21:16 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:50.545 19:21:16 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:50.545 19:21:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:50.545 19:21:16 json_config -- 
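Editor's note: the trace above shows the json_config suite launching spdk_tgt with --wait-for-rpc and then blocking until the RPC socket answers. The following is a minimal by-hand sketch of that start-and-wait step, not a copy of the test scripts: the paths are the ones printed in the trace, while the $SPDK_DIR variable and the use of rpc_get_methods as the readiness probe are assumptions.

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path as seen in the trace; adjust for your checkout
    $SPDK_DIR/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    tgt_pid=$!
    # Poll the RPC socket until the target responds (roughly what waitforlisten does in the trace)
    until $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "target $tgt_pid is listening on /var/tmp/spdk_tgt.sock"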
json_config/json_config.sh@45 -- # local ret=0 00:05:50.545 19:21:16 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:50.545 19:21:16 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:50.545 19:21:16 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:50.545 19:21:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:50.545 19:21:16 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:50.806 19:21:16 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:50.806 19:21:16 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:50.806 19:21:16 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:50.806 19:21:16 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:50.806 19:21:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:50.806 19:21:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:50.806 19:21:16 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:50.806 19:21:16 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:50.806 19:21:16 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:50.806 19:21:16 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:50.806 19:21:16 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:50.806 19:21:16 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:50.806 19:21:16 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:50.806 19:21:16 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:50.806 19:21:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:50.806 19:21:16 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:50.806 19:21:16 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:50.806 19:21:16 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:50.806 19:21:16 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:50.806 19:21:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:51.121 MallocForNvmf0 00:05:51.121 19:21:17 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:51.121 19:21:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:51.397 MallocForNvmf1 00:05:51.397 19:21:17 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:51.397 19:21:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:51.397 [2024-05-15 19:21:17.513500] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:51.397 19:21:17 
json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:51.397 19:21:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:51.657 19:21:17 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:51.657 19:21:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:51.917 19:21:17 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:51.917 19:21:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:52.177 19:21:18 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:52.177 19:21:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:52.177 [2024-05-15 19:21:18.295563] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:05:52.177 [2024-05-15 19:21:18.296122] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:52.177 19:21:18 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:52.177 19:21:18 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:52.177 19:21:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:52.177 19:21:18 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:52.177 19:21:18 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:52.177 19:21:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:52.438 19:21:18 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:52.438 19:21:18 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:52.439 19:21:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:52.439 MallocBdevForConfigChangeCheck 00:05:52.439 19:21:18 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:52.439 19:21:18 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:52.439 19:21:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:52.699 19:21:18 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:52.699 19:21:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:52.959 19:21:18 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down 
applications...' 00:05:52.959 INFO: shutting down applications... 00:05:52.959 19:21:18 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:52.959 19:21:18 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:52.959 19:21:18 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:52.959 19:21:18 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:53.219 Calling clear_iscsi_subsystem 00:05:53.219 Calling clear_nvmf_subsystem 00:05:53.219 Calling clear_nbd_subsystem 00:05:53.219 Calling clear_ublk_subsystem 00:05:53.219 Calling clear_vhost_blk_subsystem 00:05:53.219 Calling clear_vhost_scsi_subsystem 00:05:53.219 Calling clear_bdev_subsystem 00:05:53.479 19:21:19 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:53.479 19:21:19 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:53.479 19:21:19 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:53.479 19:21:19 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:53.479 19:21:19 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:53.479 19:21:19 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:53.739 19:21:19 json_config -- json_config/json_config.sh@345 -- # break 00:05:53.739 19:21:19 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:53.739 19:21:19 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:53.739 19:21:19 json_config -- json_config/common.sh@31 -- # local app=target 00:05:53.739 19:21:19 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:53.739 19:21:19 json_config -- json_config/common.sh@35 -- # [[ -n 3367245 ]] 00:05:53.739 19:21:19 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3367245 00:05:53.739 [2024-05-15 19:21:19.766253] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:05:53.739 19:21:19 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:53.739 19:21:19 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:53.739 19:21:19 json_config -- json_config/common.sh@41 -- # kill -0 3367245 00:05:53.739 19:21:19 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:54.310 19:21:20 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:54.310 19:21:20 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:54.310 19:21:20 json_config -- json_config/common.sh@41 -- # kill -0 3367245 00:05:54.310 19:21:20 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:54.310 19:21:20 json_config -- json_config/common.sh@43 -- # break 00:05:54.310 19:21:20 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:54.310 19:21:20 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:54.310 SPDK target shutdown done 00:05:54.310 19:21:20 json_config -- 
json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:54.310 INFO: relaunching applications... 00:05:54.310 19:21:20 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:54.310 19:21:20 json_config -- json_config/common.sh@9 -- # local app=target 00:05:54.310 19:21:20 json_config -- json_config/common.sh@10 -- # shift 00:05:54.310 19:21:20 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:54.310 19:21:20 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:54.310 19:21:20 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:54.310 19:21:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:54.310 19:21:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:54.310 19:21:20 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3368411 00:05:54.310 19:21:20 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:54.310 Waiting for target to run... 00:05:54.310 19:21:20 json_config -- json_config/common.sh@25 -- # waitforlisten 3368411 /var/tmp/spdk_tgt.sock 00:05:54.310 19:21:20 json_config -- common/autotest_common.sh@827 -- # '[' -z 3368411 ']' 00:05:54.310 19:21:20 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:54.310 19:21:20 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:54.310 19:21:20 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:54.310 19:21:20 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:54.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:54.310 19:21:20 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:54.310 19:21:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:54.310 [2024-05-15 19:21:20.330927] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
00:05:54.310 [2024-05-15 19:21:20.331010] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3368411 ] 00:05:54.310 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.570 [2024-05-15 19:21:20.650625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.570 [2024-05-15 19:21:20.713135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.140 [2024-05-15 19:21:21.202182] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:55.140 [2024-05-15 19:21:21.234164] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:05:55.140 [2024-05-15 19:21:21.234720] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:55.140 19:21:21 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:55.140 19:21:21 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:55.140 19:21:21 json_config -- json_config/common.sh@26 -- # echo '' 00:05:55.140 00:05:55.140 19:21:21 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:55.140 19:21:21 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:55.140 INFO: Checking if target configuration is the same... 00:05:55.140 19:21:21 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:55.140 19:21:21 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:55.140 19:21:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:55.140 + '[' 2 -ne 2 ']' 00:05:55.140 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:55.140 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:55.140 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:55.140 +++ basename /dev/fd/62 00:05:55.140 ++ mktemp /tmp/62.XXX 00:05:55.140 + tmp_file_1=/tmp/62.k9c 00:05:55.140 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:55.140 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:55.140 + tmp_file_2=/tmp/spdk_tgt_config.json.fVz 00:05:55.140 + ret=0 00:05:55.140 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:55.710 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:55.710 + diff -u /tmp/62.k9c /tmp/spdk_tgt_config.json.fVz 00:05:55.710 + echo 'INFO: JSON config files are the same' 00:05:55.710 INFO: JSON config files are the same 00:05:55.710 + rm /tmp/62.k9c /tmp/spdk_tgt_config.json.fVz 00:05:55.710 + exit 0 00:05:55.710 19:21:21 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:55.710 19:21:21 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:55.710 INFO: changing configuration and checking if this can be detected... 
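Editor's note: the "JSON config files are the same" check just traced amounts to dumping the live configuration with save_config, normalizing both JSON documents with config_filter.py -method sort, and diffing the results. A rough equivalent is sketched below; it assumes config_filter.py filters stdin to stdout (as json_diff.sh appears to use it) and reuses the paths from the trace.

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    SORT="$SPDK_DIR/test/json_config/config_filter.py -method sort"
    $RPC save_config | $SORT > /tmp/live_sorted.json                   # live target configuration
    $SORT < $SPDK_DIR/spdk_tgt_config.json > /tmp/file_sorted.json     # config file the target was relaunched with
    diff -u /tmp/file_sorted.json /tmp/live_sorted.json \
        && echo 'INFO: JSON config files are the same'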
00:05:55.710 19:21:21 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:55.710 19:21:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:55.970 19:21:21 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:55.970 19:21:21 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:55.970 19:21:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:55.970 + '[' 2 -ne 2 ']' 00:05:55.970 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:55.970 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:55.970 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:55.970 +++ basename /dev/fd/62 00:05:55.970 ++ mktemp /tmp/62.XXX 00:05:55.970 + tmp_file_1=/tmp/62.8ax 00:05:55.970 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:55.970 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:55.970 + tmp_file_2=/tmp/spdk_tgt_config.json.9i9 00:05:55.970 + ret=0 00:05:55.970 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:56.229 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:56.229 + diff -u /tmp/62.8ax /tmp/spdk_tgt_config.json.9i9 00:05:56.229 + ret=1 00:05:56.229 + echo '=== Start of file: /tmp/62.8ax ===' 00:05:56.229 + cat /tmp/62.8ax 00:05:56.229 + echo '=== End of file: /tmp/62.8ax ===' 00:05:56.229 + echo '' 00:05:56.229 + echo '=== Start of file: /tmp/spdk_tgt_config.json.9i9 ===' 00:05:56.229 + cat /tmp/spdk_tgt_config.json.9i9 00:05:56.229 + echo '=== End of file: /tmp/spdk_tgt_config.json.9i9 ===' 00:05:56.229 + echo '' 00:05:56.229 + rm /tmp/62.8ax /tmp/spdk_tgt_config.json.9i9 00:05:56.229 + exit 1 00:05:56.229 19:21:22 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:56.229 INFO: configuration change detected. 
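Editor's note: the change detection just traced flips the diff result on purpose: the marker bdev MallocBdevForConfigChangeCheck created at setup is deleted over RPC, the live configuration is re-dumped and re-sorted, and the diff is now expected to be non-empty. A sketch under the same assumptions as the previous one, reusing the sorted copy of spdk_tgt_config.json produced there:

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    SORT="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort"
    $RPC bdev_malloc_delete MallocBdevForConfigChangeCheck     # remove the marker bdev created earlier
    $RPC save_config | $SORT > /tmp/live_sorted.json           # re-dump and re-sort the live config
    if ! diff -u /tmp/file_sorted.json /tmp/live_sorted.json > /dev/null; then
        echo 'INFO: configuration change detected.'
    fi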
00:05:56.229 19:21:22 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:56.229 19:21:22 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:56.229 19:21:22 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:56.229 19:21:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.229 19:21:22 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:56.229 19:21:22 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:56.229 19:21:22 json_config -- json_config/json_config.sh@317 -- # [[ -n 3368411 ]] 00:05:56.229 19:21:22 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:56.229 19:21:22 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:56.229 19:21:22 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:56.229 19:21:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.229 19:21:22 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:56.229 19:21:22 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:56.229 19:21:22 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:56.229 19:21:22 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:56.229 19:21:22 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:56.229 19:21:22 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:56.229 19:21:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:56.229 19:21:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.229 19:21:22 json_config -- json_config/json_config.sh@323 -- # killprocess 3368411 00:05:56.229 19:21:22 json_config -- common/autotest_common.sh@946 -- # '[' -z 3368411 ']' 00:05:56.229 19:21:22 json_config -- common/autotest_common.sh@950 -- # kill -0 3368411 00:05:56.229 19:21:22 json_config -- common/autotest_common.sh@951 -- # uname 00:05:56.229 19:21:22 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:56.229 19:21:22 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3368411 00:05:56.489 19:21:22 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:56.489 19:21:22 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:56.489 19:21:22 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3368411' 00:05:56.489 killing process with pid 3368411 00:05:56.489 19:21:22 json_config -- common/autotest_common.sh@965 -- # kill 3368411 00:05:56.489 [2024-05-15 19:21:22.437679] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:05:56.489 19:21:22 json_config -- common/autotest_common.sh@970 -- # wait 3368411 00:05:56.750 19:21:22 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:56.750 19:21:22 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:56.750 19:21:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:56.750 19:21:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.750 19:21:22 
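Editor's note: the teardown traced above is the killprocess pattern: confirm the pid is still alive, check the process name is not sudo, send the kill, and wait for it to exit. A condensed sketch of that pattern follows; the pid value is only illustrative.

    pid=3368411                                    # pid printed in the trace; purely illustrative here
    if kill -0 "$pid" 2>/dev/null; then
        name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0 for an SPDK target
        [ "$name" = sudo ] && { echo "refusing to kill $pid ($name)"; exit 1; }
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null                    # wait only succeeds when $pid is a child of this shell, as in the test scripts
    fi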
json_config -- json_config/json_config.sh@328 -- # return 0 00:05:56.750 19:21:22 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:56.750 INFO: Success 00:05:56.750 00:05:56.750 real 0m7.720s 00:05:56.750 user 0m9.741s 00:05:56.750 sys 0m2.002s 00:05:56.750 19:21:22 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:56.750 19:21:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.750 ************************************ 00:05:56.750 END TEST json_config 00:05:56.750 ************************************ 00:05:56.750 19:21:22 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:56.750 19:21:22 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:56.750 19:21:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:56.750 19:21:22 -- common/autotest_common.sh@10 -- # set +x 00:05:56.750 ************************************ 00:05:56.750 START TEST json_config_extra_key 00:05:56.750 ************************************ 00:05:56.750 19:21:22 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:56.750 19:21:22 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:56.750 19:21:22 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:56.750 19:21:22 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:56.750 19:21:22 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:56.750 19:21:22 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:56.750 19:21:22 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:56.750 19:21:22 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:56.750 19:21:22 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:56.750 19:21:22 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:56.750 19:21:22 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:56.750 19:21:22 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:56.750 19:21:22 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:56.750 19:21:22 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:56.750 19:21:22 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:56.750 19:21:22 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:56.750 19:21:22 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:56.750 19:21:22 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:56.750 19:21:22 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:56.750 19:21:22 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:56.750 19:21:22 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:56.750 19:21:22 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:56.750 19:21:22 
json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:56.750 19:21:22 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.750 19:21:22 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.750 19:21:22 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.750 19:21:22 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:56.750 19:21:22 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.750 19:21:22 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:56.750 19:21:22 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:56.750 19:21:22 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:56.750 19:21:22 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:56.750 19:21:22 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:56.750 19:21:22 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:56.750 19:21:22 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:56.750 19:21:22 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:56.750 19:21:22 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:57.010 19:21:22 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:57.010 19:21:22 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:57.010 19:21:22 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:57.010 19:21:22 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:57.010 19:21:22 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:57.010 19:21:22 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:57.010 19:21:22 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:57.010 19:21:22 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:57.010 19:21:22 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:57.010 19:21:22 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:57.010 19:21:22 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:57.010 INFO: launching applications... 00:05:57.010 19:21:22 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:57.010 19:21:22 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:57.010 19:21:22 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:57.010 19:21:22 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:57.010 19:21:22 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:57.010 19:21:22 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:57.010 19:21:22 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:57.010 19:21:22 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:57.010 19:21:22 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3368986 00:05:57.010 19:21:22 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:57.010 Waiting for target to run... 00:05:57.010 19:21:22 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3368986 /var/tmp/spdk_tgt.sock 00:05:57.010 19:21:22 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 3368986 ']' 00:05:57.010 19:21:22 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:57.010 19:21:22 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:57.010 19:21:22 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:57.010 19:21:22 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:57.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:57.010 19:21:22 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:57.010 19:21:22 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:57.010 [2024-05-15 19:21:22.994814] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
00:05:57.010 [2024-05-15 19:21:22.994874] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3368986 ] 00:05:57.010 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.269 [2024-05-15 19:21:23.279158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.269 [2024-05-15 19:21:23.338549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.839 19:21:23 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:57.839 19:21:23 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:05:57.839 19:21:23 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:57.839 00:05:57.839 19:21:23 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:57.839 INFO: shutting down applications... 00:05:57.839 19:21:23 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:57.839 19:21:23 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:57.839 19:21:23 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:57.839 19:21:23 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3368986 ]] 00:05:57.839 19:21:23 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3368986 00:05:57.839 19:21:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:57.839 19:21:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:57.839 19:21:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3368986 00:05:57.839 19:21:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:58.409 19:21:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:58.409 19:21:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:58.409 19:21:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3368986 00:05:58.409 19:21:24 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:58.409 19:21:24 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:58.409 19:21:24 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:58.409 19:21:24 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:58.409 SPDK target shutdown done 00:05:58.409 19:21:24 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:58.409 Success 00:05:58.409 00:05:58.409 real 0m1.525s 00:05:58.409 user 0m1.264s 00:05:58.409 sys 0m0.365s 00:05:58.409 19:21:24 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:58.409 19:21:24 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:58.409 ************************************ 00:05:58.409 END TEST json_config_extra_key 00:05:58.409 ************************************ 00:05:58.409 19:21:24 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:58.409 19:21:24 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:58.409 19:21:24 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:58.409 19:21:24 -- common/autotest_common.sh@10 -- # set +x 00:05:58.409 ************************************ 
00:05:58.409 START TEST alias_rpc 00:05:58.409 ************************************ 00:05:58.409 19:21:24 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:58.409 * Looking for test storage... 00:05:58.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:58.409 19:21:24 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:58.409 19:21:24 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3369371 00:05:58.409 19:21:24 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3369371 00:05:58.409 19:21:24 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 3369371 ']' 00:05:58.409 19:21:24 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.409 19:21:24 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:58.409 19:21:24 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.409 19:21:24 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:58.409 19:21:24 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.409 19:21:24 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:58.409 [2024-05-15 19:21:24.589814] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:05:58.409 [2024-05-15 19:21:24.589878] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3369371 ] 00:05:58.669 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.669 [2024-05-15 19:21:24.675101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.669 [2024-05-15 19:21:24.741021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.608 19:21:25 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:59.608 19:21:25 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:59.608 19:21:25 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:59.608 19:21:25 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3369371 00:05:59.608 19:21:25 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 3369371 ']' 00:05:59.608 19:21:25 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 3369371 00:05:59.608 19:21:25 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:05:59.608 19:21:25 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:59.608 19:21:25 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3369371 00:05:59.608 19:21:25 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:59.608 19:21:25 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:59.608 19:21:25 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3369371' 00:05:59.608 killing process with pid 3369371 00:05:59.608 19:21:25 alias_rpc -- common/autotest_common.sh@965 -- # kill 3369371 00:05:59.608 19:21:25 alias_rpc -- common/autotest_common.sh@970 -- # wait 3369371 
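The json_config_extra_key and alias_rpc runs above both exercise the same start / RPC / stop pattern. A minimal sketch of that pattern follows; it is not taken verbatim from the captured output, and the ./spdk checkout path, the private socket path and the use of load_config over stdin are assumptions pieced together from the commands recorded above.

  ./spdk/build/bin/spdk_tgt -r /var/tmp/spdk_tgt.sock &        # start the target on a private RPC socket
  tgt_pid=$!
  # poll until the RPC socket answers, the same idea as waitforlisten in common.sh
  until ./spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  ./spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config < ./spdk/test/json_config/extra_key.json   # push a JSON config over RPC
  kill -SIGINT "$tgt_pid" && wait "$tgt_pid"                   # SIGINT asks the target to shut down cleanly, as the test does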
00:05:59.868 00:05:59.868 real 0m1.491s 00:05:59.868 user 0m1.738s 00:05:59.868 sys 0m0.385s 00:05:59.868 19:21:25 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:59.868 19:21:25 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.868 ************************************ 00:05:59.868 END TEST alias_rpc 00:05:59.868 ************************************ 00:05:59.868 19:21:25 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:05:59.868 19:21:25 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:59.868 19:21:25 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:59.868 19:21:25 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:59.868 19:21:25 -- common/autotest_common.sh@10 -- # set +x 00:05:59.868 ************************************ 00:05:59.868 START TEST spdkcli_tcp 00:05:59.868 ************************************ 00:05:59.868 19:21:26 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:00.127 * Looking for test storage... 00:06:00.127 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:00.127 19:21:26 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:00.127 19:21:26 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:00.127 19:21:26 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:00.127 19:21:26 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:00.127 19:21:26 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:00.127 19:21:26 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:00.127 19:21:26 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:00.127 19:21:26 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:00.127 19:21:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:00.127 19:21:26 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3369761 00:06:00.127 19:21:26 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3369761 00:06:00.127 19:21:26 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:00.127 19:21:26 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 3369761 ']' 00:06:00.128 19:21:26 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.128 19:21:26 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:00.128 19:21:26 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.128 19:21:26 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:00.128 19:21:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:00.128 [2024-05-15 19:21:26.168008] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
00:06:00.128 [2024-05-15 19:21:26.168076] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3369761 ] 00:06:00.128 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.128 [2024-05-15 19:21:26.253405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.388 [2024-05-15 19:21:26.325423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.388 [2024-05-15 19:21:26.325580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.958 19:21:27 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:00.958 19:21:27 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:06:00.958 19:21:27 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:00.958 19:21:27 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3369973 00:06:00.958 19:21:27 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:01.218 [ 00:06:01.218 "bdev_malloc_delete", 00:06:01.218 "bdev_malloc_create", 00:06:01.218 "bdev_null_resize", 00:06:01.218 "bdev_null_delete", 00:06:01.218 "bdev_null_create", 00:06:01.218 "bdev_nvme_cuse_unregister", 00:06:01.218 "bdev_nvme_cuse_register", 00:06:01.218 "bdev_opal_new_user", 00:06:01.218 "bdev_opal_set_lock_state", 00:06:01.218 "bdev_opal_delete", 00:06:01.218 "bdev_opal_get_info", 00:06:01.218 "bdev_opal_create", 00:06:01.218 "bdev_nvme_opal_revert", 00:06:01.218 "bdev_nvme_opal_init", 00:06:01.218 "bdev_nvme_send_cmd", 00:06:01.218 "bdev_nvme_get_path_iostat", 00:06:01.218 "bdev_nvme_get_mdns_discovery_info", 00:06:01.218 "bdev_nvme_stop_mdns_discovery", 00:06:01.218 "bdev_nvme_start_mdns_discovery", 00:06:01.218 "bdev_nvme_set_multipath_policy", 00:06:01.218 "bdev_nvme_set_preferred_path", 00:06:01.218 "bdev_nvme_get_io_paths", 00:06:01.218 "bdev_nvme_remove_error_injection", 00:06:01.218 "bdev_nvme_add_error_injection", 00:06:01.218 "bdev_nvme_get_discovery_info", 00:06:01.218 "bdev_nvme_stop_discovery", 00:06:01.218 "bdev_nvme_start_discovery", 00:06:01.218 "bdev_nvme_get_controller_health_info", 00:06:01.218 "bdev_nvme_disable_controller", 00:06:01.218 "bdev_nvme_enable_controller", 00:06:01.218 "bdev_nvme_reset_controller", 00:06:01.218 "bdev_nvme_get_transport_statistics", 00:06:01.218 "bdev_nvme_apply_firmware", 00:06:01.218 "bdev_nvme_detach_controller", 00:06:01.218 "bdev_nvme_get_controllers", 00:06:01.218 "bdev_nvme_attach_controller", 00:06:01.218 "bdev_nvme_set_hotplug", 00:06:01.218 "bdev_nvme_set_options", 00:06:01.218 "bdev_passthru_delete", 00:06:01.218 "bdev_passthru_create", 00:06:01.218 "bdev_lvol_check_shallow_copy", 00:06:01.218 "bdev_lvol_start_shallow_copy", 00:06:01.218 "bdev_lvol_grow_lvstore", 00:06:01.218 "bdev_lvol_get_lvols", 00:06:01.218 "bdev_lvol_get_lvstores", 00:06:01.218 "bdev_lvol_delete", 00:06:01.218 "bdev_lvol_set_read_only", 00:06:01.218 "bdev_lvol_resize", 00:06:01.218 "bdev_lvol_decouple_parent", 00:06:01.218 "bdev_lvol_inflate", 00:06:01.218 "bdev_lvol_rename", 00:06:01.218 "bdev_lvol_clone_bdev", 00:06:01.218 "bdev_lvol_clone", 00:06:01.218 "bdev_lvol_snapshot", 00:06:01.218 "bdev_lvol_create", 00:06:01.218 "bdev_lvol_delete_lvstore", 00:06:01.218 "bdev_lvol_rename_lvstore", 00:06:01.218 "bdev_lvol_create_lvstore", 00:06:01.218 "bdev_raid_set_options", 
00:06:01.218 "bdev_raid_remove_base_bdev", 00:06:01.218 "bdev_raid_add_base_bdev", 00:06:01.218 "bdev_raid_delete", 00:06:01.218 "bdev_raid_create", 00:06:01.218 "bdev_raid_get_bdevs", 00:06:01.218 "bdev_error_inject_error", 00:06:01.218 "bdev_error_delete", 00:06:01.218 "bdev_error_create", 00:06:01.218 "bdev_split_delete", 00:06:01.218 "bdev_split_create", 00:06:01.218 "bdev_delay_delete", 00:06:01.218 "bdev_delay_create", 00:06:01.218 "bdev_delay_update_latency", 00:06:01.218 "bdev_zone_block_delete", 00:06:01.218 "bdev_zone_block_create", 00:06:01.218 "blobfs_create", 00:06:01.218 "blobfs_detect", 00:06:01.218 "blobfs_set_cache_size", 00:06:01.218 "bdev_aio_delete", 00:06:01.218 "bdev_aio_rescan", 00:06:01.218 "bdev_aio_create", 00:06:01.218 "bdev_ftl_set_property", 00:06:01.218 "bdev_ftl_get_properties", 00:06:01.218 "bdev_ftl_get_stats", 00:06:01.218 "bdev_ftl_unmap", 00:06:01.218 "bdev_ftl_unload", 00:06:01.218 "bdev_ftl_delete", 00:06:01.218 "bdev_ftl_load", 00:06:01.218 "bdev_ftl_create", 00:06:01.218 "bdev_virtio_attach_controller", 00:06:01.218 "bdev_virtio_scsi_get_devices", 00:06:01.218 "bdev_virtio_detach_controller", 00:06:01.218 "bdev_virtio_blk_set_hotplug", 00:06:01.218 "bdev_iscsi_delete", 00:06:01.218 "bdev_iscsi_create", 00:06:01.218 "bdev_iscsi_set_options", 00:06:01.218 "accel_error_inject_error", 00:06:01.218 "ioat_scan_accel_module", 00:06:01.219 "dsa_scan_accel_module", 00:06:01.219 "iaa_scan_accel_module", 00:06:01.219 "vfu_virtio_create_scsi_endpoint", 00:06:01.219 "vfu_virtio_scsi_remove_target", 00:06:01.219 "vfu_virtio_scsi_add_target", 00:06:01.219 "vfu_virtio_create_blk_endpoint", 00:06:01.219 "vfu_virtio_delete_endpoint", 00:06:01.219 "keyring_file_remove_key", 00:06:01.219 "keyring_file_add_key", 00:06:01.219 "iscsi_get_histogram", 00:06:01.219 "iscsi_enable_histogram", 00:06:01.219 "iscsi_set_options", 00:06:01.219 "iscsi_get_auth_groups", 00:06:01.219 "iscsi_auth_group_remove_secret", 00:06:01.219 "iscsi_auth_group_add_secret", 00:06:01.219 "iscsi_delete_auth_group", 00:06:01.219 "iscsi_create_auth_group", 00:06:01.219 "iscsi_set_discovery_auth", 00:06:01.219 "iscsi_get_options", 00:06:01.219 "iscsi_target_node_request_logout", 00:06:01.219 "iscsi_target_node_set_redirect", 00:06:01.219 "iscsi_target_node_set_auth", 00:06:01.219 "iscsi_target_node_add_lun", 00:06:01.219 "iscsi_get_stats", 00:06:01.219 "iscsi_get_connections", 00:06:01.219 "iscsi_portal_group_set_auth", 00:06:01.219 "iscsi_start_portal_group", 00:06:01.219 "iscsi_delete_portal_group", 00:06:01.219 "iscsi_create_portal_group", 00:06:01.219 "iscsi_get_portal_groups", 00:06:01.219 "iscsi_delete_target_node", 00:06:01.219 "iscsi_target_node_remove_pg_ig_maps", 00:06:01.219 "iscsi_target_node_add_pg_ig_maps", 00:06:01.219 "iscsi_create_target_node", 00:06:01.219 "iscsi_get_target_nodes", 00:06:01.219 "iscsi_delete_initiator_group", 00:06:01.219 "iscsi_initiator_group_remove_initiators", 00:06:01.219 "iscsi_initiator_group_add_initiators", 00:06:01.219 "iscsi_create_initiator_group", 00:06:01.219 "iscsi_get_initiator_groups", 00:06:01.219 "nvmf_set_crdt", 00:06:01.219 "nvmf_set_config", 00:06:01.219 "nvmf_set_max_subsystems", 00:06:01.219 "nvmf_stop_mdns_prr", 00:06:01.219 "nvmf_publish_mdns_prr", 00:06:01.219 "nvmf_subsystem_get_listeners", 00:06:01.219 "nvmf_subsystem_get_qpairs", 00:06:01.219 "nvmf_subsystem_get_controllers", 00:06:01.219 "nvmf_get_stats", 00:06:01.219 "nvmf_get_transports", 00:06:01.219 "nvmf_create_transport", 00:06:01.219 "nvmf_get_targets", 00:06:01.219 
"nvmf_delete_target", 00:06:01.219 "nvmf_create_target", 00:06:01.219 "nvmf_subsystem_allow_any_host", 00:06:01.219 "nvmf_subsystem_remove_host", 00:06:01.219 "nvmf_subsystem_add_host", 00:06:01.219 "nvmf_ns_remove_host", 00:06:01.219 "nvmf_ns_add_host", 00:06:01.219 "nvmf_subsystem_remove_ns", 00:06:01.219 "nvmf_subsystem_add_ns", 00:06:01.219 "nvmf_subsystem_listener_set_ana_state", 00:06:01.219 "nvmf_discovery_get_referrals", 00:06:01.219 "nvmf_discovery_remove_referral", 00:06:01.219 "nvmf_discovery_add_referral", 00:06:01.219 "nvmf_subsystem_remove_listener", 00:06:01.219 "nvmf_subsystem_add_listener", 00:06:01.219 "nvmf_delete_subsystem", 00:06:01.219 "nvmf_create_subsystem", 00:06:01.219 "nvmf_get_subsystems", 00:06:01.219 "env_dpdk_get_mem_stats", 00:06:01.219 "nbd_get_disks", 00:06:01.219 "nbd_stop_disk", 00:06:01.219 "nbd_start_disk", 00:06:01.219 "ublk_recover_disk", 00:06:01.219 "ublk_get_disks", 00:06:01.219 "ublk_stop_disk", 00:06:01.219 "ublk_start_disk", 00:06:01.219 "ublk_destroy_target", 00:06:01.219 "ublk_create_target", 00:06:01.219 "virtio_blk_create_transport", 00:06:01.219 "virtio_blk_get_transports", 00:06:01.219 "vhost_controller_set_coalescing", 00:06:01.219 "vhost_get_controllers", 00:06:01.219 "vhost_delete_controller", 00:06:01.219 "vhost_create_blk_controller", 00:06:01.219 "vhost_scsi_controller_remove_target", 00:06:01.219 "vhost_scsi_controller_add_target", 00:06:01.219 "vhost_start_scsi_controller", 00:06:01.219 "vhost_create_scsi_controller", 00:06:01.219 "thread_set_cpumask", 00:06:01.219 "framework_get_scheduler", 00:06:01.219 "framework_set_scheduler", 00:06:01.219 "framework_get_reactors", 00:06:01.219 "thread_get_io_channels", 00:06:01.219 "thread_get_pollers", 00:06:01.219 "thread_get_stats", 00:06:01.219 "framework_monitor_context_switch", 00:06:01.219 "spdk_kill_instance", 00:06:01.219 "log_enable_timestamps", 00:06:01.219 "log_get_flags", 00:06:01.219 "log_clear_flag", 00:06:01.219 "log_set_flag", 00:06:01.219 "log_get_level", 00:06:01.219 "log_set_level", 00:06:01.219 "log_get_print_level", 00:06:01.219 "log_set_print_level", 00:06:01.219 "framework_enable_cpumask_locks", 00:06:01.219 "framework_disable_cpumask_locks", 00:06:01.219 "framework_wait_init", 00:06:01.219 "framework_start_init", 00:06:01.219 "scsi_get_devices", 00:06:01.219 "bdev_get_histogram", 00:06:01.219 "bdev_enable_histogram", 00:06:01.219 "bdev_set_qos_limit", 00:06:01.219 "bdev_set_qd_sampling_period", 00:06:01.219 "bdev_get_bdevs", 00:06:01.219 "bdev_reset_iostat", 00:06:01.219 "bdev_get_iostat", 00:06:01.219 "bdev_examine", 00:06:01.219 "bdev_wait_for_examine", 00:06:01.219 "bdev_set_options", 00:06:01.219 "notify_get_notifications", 00:06:01.219 "notify_get_types", 00:06:01.219 "accel_get_stats", 00:06:01.219 "accel_set_options", 00:06:01.219 "accel_set_driver", 00:06:01.219 "accel_crypto_key_destroy", 00:06:01.219 "accel_crypto_keys_get", 00:06:01.219 "accel_crypto_key_create", 00:06:01.219 "accel_assign_opc", 00:06:01.219 "accel_get_module_info", 00:06:01.219 "accel_get_opc_assignments", 00:06:01.219 "vmd_rescan", 00:06:01.219 "vmd_remove_device", 00:06:01.219 "vmd_enable", 00:06:01.219 "sock_get_default_impl", 00:06:01.219 "sock_set_default_impl", 00:06:01.219 "sock_impl_set_options", 00:06:01.219 "sock_impl_get_options", 00:06:01.219 "iobuf_get_stats", 00:06:01.219 "iobuf_set_options", 00:06:01.219 "keyring_get_keys", 00:06:01.219 "framework_get_pci_devices", 00:06:01.219 "framework_get_config", 00:06:01.219 "framework_get_subsystems", 00:06:01.219 
"vfu_tgt_set_base_path", 00:06:01.219 "trace_get_info", 00:06:01.219 "trace_get_tpoint_group_mask", 00:06:01.219 "trace_disable_tpoint_group", 00:06:01.219 "trace_enable_tpoint_group", 00:06:01.219 "trace_clear_tpoint_mask", 00:06:01.219 "trace_set_tpoint_mask", 00:06:01.219 "spdk_get_version", 00:06:01.219 "rpc_get_methods" 00:06:01.219 ] 00:06:01.219 19:21:27 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:01.219 19:21:27 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:01.219 19:21:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:01.219 19:21:27 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:01.219 19:21:27 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3369761 00:06:01.219 19:21:27 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 3369761 ']' 00:06:01.219 19:21:27 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 3369761 00:06:01.219 19:21:27 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:06:01.219 19:21:27 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:01.219 19:21:27 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3369761 00:06:01.219 19:21:27 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:01.219 19:21:27 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:01.219 19:21:27 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3369761' 00:06:01.219 killing process with pid 3369761 00:06:01.219 19:21:27 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 3369761 00:06:01.219 19:21:27 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 3369761 00:06:01.479 00:06:01.479 real 0m1.538s 00:06:01.479 user 0m2.961s 00:06:01.479 sys 0m0.433s 00:06:01.479 19:21:27 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:01.479 19:21:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:01.479 ************************************ 00:06:01.479 END TEST spdkcli_tcp 00:06:01.479 ************************************ 00:06:01.479 19:21:27 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:01.479 19:21:27 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:01.479 19:21:27 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:01.479 19:21:27 -- common/autotest_common.sh@10 -- # set +x 00:06:01.479 ************************************ 00:06:01.479 START TEST dpdk_mem_utility 00:06:01.479 ************************************ 00:06:01.479 19:21:27 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:01.740 * Looking for test storage... 
00:06:01.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:01.740 19:21:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:01.740 19:21:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3370168 00:06:01.740 19:21:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3370168 00:06:01.740 19:21:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:01.740 19:21:27 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 3370168 ']' 00:06:01.740 19:21:27 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.740 19:21:27 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:01.740 19:21:27 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.740 19:21:27 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:01.740 19:21:27 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:01.740 [2024-05-15 19:21:27.773748] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:06:01.740 [2024-05-15 19:21:27.773818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3370168 ] 00:06:01.740 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.740 [2024-05-15 19:21:27.864239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.000 [2024-05-15 19:21:27.935155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.569 19:21:28 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:02.570 19:21:28 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:06:02.570 19:21:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:02.570 19:21:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:02.570 19:21:28 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.570 19:21:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:02.570 { 00:06:02.570 "filename": "/tmp/spdk_mem_dump.txt" 00:06:02.570 } 00:06:02.570 19:21:28 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.570 19:21:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:02.570 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:02.570 1 heaps totaling size 814.000000 MiB 00:06:02.570 size: 814.000000 MiB heap id: 0 00:06:02.570 end heaps---------- 00:06:02.570 8 mempools totaling size 598.116089 MiB 00:06:02.570 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:02.570 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:02.570 size: 84.521057 MiB name: bdev_io_3370168 00:06:02.570 size: 51.011292 MiB name: evtpool_3370168 00:06:02.570 size: 50.003479 MiB name: 
msgpool_3370168 00:06:02.570 size: 21.763794 MiB name: PDU_Pool 00:06:02.570 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:02.570 size: 0.026123 MiB name: Session_Pool 00:06:02.570 end mempools------- 00:06:02.570 6 memzones totaling size 4.142822 MiB 00:06:02.570 size: 1.000366 MiB name: RG_ring_0_3370168 00:06:02.570 size: 1.000366 MiB name: RG_ring_1_3370168 00:06:02.570 size: 1.000366 MiB name: RG_ring_4_3370168 00:06:02.570 size: 1.000366 MiB name: RG_ring_5_3370168 00:06:02.570 size: 0.125366 MiB name: RG_ring_2_3370168 00:06:02.570 size: 0.015991 MiB name: RG_ring_3_3370168 00:06:02.570 end memzones------- 00:06:02.570 19:21:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:02.830 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:02.830 list of free elements. size: 12.519348 MiB 00:06:02.830 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:02.830 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:02.830 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:02.830 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:02.830 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:02.830 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:02.830 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:02.830 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:02.830 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:02.830 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:02.830 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:02.830 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:02.830 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:02.830 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:02.830 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:02.830 list of standard malloc elements. 
size: 199.218079 MiB 00:06:02.830 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:02.830 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:02.830 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:02.830 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:02.830 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:02.830 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:02.830 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:02.830 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:02.830 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:02.830 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:02.830 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:02.830 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:02.830 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:02.830 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:02.830 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:02.830 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:02.830 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:02.830 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:02.830 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:02.830 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:02.830 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:02.830 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:02.830 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:02.830 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:02.830 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:02.830 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:02.830 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:02.830 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:02.830 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:02.830 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:02.830 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:02.830 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:02.830 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:02.830 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:02.830 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:02.830 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:02.830 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:02.830 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:02.830 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:02.830 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:02.830 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:02.830 list of memzone associated elements. 
size: 602.262573 MiB 00:06:02.830 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:02.830 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:02.830 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:02.830 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:02.830 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:02.830 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3370168_0 00:06:02.830 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:02.830 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3370168_0 00:06:02.830 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:02.830 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3370168_0 00:06:02.830 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:02.830 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:02.830 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:02.830 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:02.830 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:02.830 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3370168 00:06:02.830 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:02.830 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3370168 00:06:02.830 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:02.830 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3370168 00:06:02.830 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:02.830 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:02.830 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:02.830 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:02.830 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:02.830 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:02.830 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:02.830 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:02.830 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:02.830 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3370168 00:06:02.830 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:02.830 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3370168 00:06:02.830 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:02.830 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3370168 00:06:02.830 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:02.830 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3370168 00:06:02.830 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:02.830 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3370168 00:06:02.830 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:02.830 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:02.830 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:02.830 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:02.830 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:02.830 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:02.830 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:02.830 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_3370168 00:06:02.830 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:02.830 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:02.830 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:02.830 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:02.830 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:02.830 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3370168 00:06:02.830 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:02.830 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:02.830 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:02.830 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3370168 00:06:02.830 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:02.830 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3370168 00:06:02.830 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:02.830 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:02.830 19:21:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:02.831 19:21:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3370168 00:06:02.831 19:21:28 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 3370168 ']' 00:06:02.831 19:21:28 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 3370168 00:06:02.831 19:21:28 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:06:02.831 19:21:28 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:02.831 19:21:28 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3370168 00:06:02.831 19:21:28 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:02.831 19:21:28 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:02.831 19:21:28 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3370168' 00:06:02.831 killing process with pid 3370168 00:06:02.831 19:21:28 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 3370168 00:06:02.831 19:21:28 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 3370168 00:06:03.091 00:06:03.091 real 0m1.416s 00:06:03.091 user 0m1.601s 00:06:03.091 sys 0m0.391s 00:06:03.091 19:21:29 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:03.091 19:21:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:03.091 ************************************ 00:06:03.091 END TEST dpdk_mem_utility 00:06:03.091 ************************************ 00:06:03.091 19:21:29 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:03.091 19:21:29 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:03.091 19:21:29 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:03.091 19:21:29 -- common/autotest_common.sh@10 -- # set +x 00:06:03.091 ************************************ 00:06:03.091 START TEST event 00:06:03.091 ************************************ 00:06:03.091 19:21:29 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:03.091 * Looking for test storage... 
00:06:03.091 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:03.091 19:21:29 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:03.091 19:21:29 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:03.091 19:21:29 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:03.091 19:21:29 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:03.091 19:21:29 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:03.091 19:21:29 event -- common/autotest_common.sh@10 -- # set +x 00:06:03.091 ************************************ 00:06:03.091 START TEST event_perf 00:06:03.091 ************************************ 00:06:03.091 19:21:29 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:03.091 Running I/O for 1 seconds...[2024-05-15 19:21:29.273520] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:06:03.091 [2024-05-15 19:21:29.273607] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3370557 ] 00:06:03.350 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.350 [2024-05-15 19:21:29.361838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:03.350 [2024-05-15 19:21:29.433970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.350 [2024-05-15 19:21:29.434107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.350 [2024-05-15 19:21:29.434498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:03.350 [2024-05-15 19:21:29.434500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.737 Running I/O for 1 seconds... 00:06:04.737 lcore 0: 172267 00:06:04.737 lcore 1: 172269 00:06:04.737 lcore 2: 172267 00:06:04.737 lcore 3: 172269 00:06:04.737 done. 00:06:04.737 00:06:04.737 real 0m1.237s 00:06:04.737 user 0m4.137s 00:06:04.737 sys 0m0.098s 00:06:04.737 19:21:30 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:04.737 19:21:30 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:04.737 ************************************ 00:06:04.737 END TEST event_perf 00:06:04.737 ************************************ 00:06:04.737 19:21:30 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:04.737 19:21:30 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:04.737 19:21:30 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:04.737 19:21:30 event -- common/autotest_common.sh@10 -- # set +x 00:06:04.737 ************************************ 00:06:04.737 START TEST event_reactor 00:06:04.737 ************************************ 00:06:04.737 19:21:30 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:04.737 [2024-05-15 19:21:30.591321] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
00:06:04.737 [2024-05-15 19:21:30.591421] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3370892 ] 00:06:04.737 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.737 [2024-05-15 19:21:30.679258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.737 [2024-05-15 19:21:30.749793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.676 test_start 00:06:05.676 oneshot 00:06:05.676 tick 100 00:06:05.676 tick 100 00:06:05.676 tick 250 00:06:05.676 tick 100 00:06:05.676 tick 100 00:06:05.676 tick 250 00:06:05.676 tick 100 00:06:05.676 tick 500 00:06:05.676 tick 100 00:06:05.676 tick 100 00:06:05.676 tick 250 00:06:05.676 tick 100 00:06:05.676 tick 100 00:06:05.676 test_end 00:06:05.676 00:06:05.676 real 0m1.230s 00:06:05.676 user 0m1.138s 00:06:05.676 sys 0m0.088s 00:06:05.676 19:21:31 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:05.676 19:21:31 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:05.676 ************************************ 00:06:05.676 END TEST event_reactor 00:06:05.676 ************************************ 00:06:05.676 19:21:31 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:05.676 19:21:31 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:05.676 19:21:31 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:05.676 19:21:31 event -- common/autotest_common.sh@10 -- # set +x 00:06:05.935 ************************************ 00:06:05.935 START TEST event_reactor_perf 00:06:05.935 ************************************ 00:06:05.935 19:21:31 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:05.935 [2024-05-15 19:21:31.902828] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
00:06:05.935 [2024-05-15 19:21:31.902925] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3371044 ] 00:06:05.936 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.936 [2024-05-15 19:21:31.990799] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.936 [2024-05-15 19:21:32.061749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.321 test_start 00:06:07.321 test_end 00:06:07.321 Performance: 364854 events per second 00:06:07.321 00:06:07.321 real 0m1.231s 00:06:07.321 user 0m1.134s 00:06:07.321 sys 0m0.092s 00:06:07.321 19:21:33 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:07.321 19:21:33 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:07.321 ************************************ 00:06:07.321 END TEST event_reactor_perf 00:06:07.321 ************************************ 00:06:07.321 19:21:33 event -- event/event.sh@49 -- # uname -s 00:06:07.321 19:21:33 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:07.321 19:21:33 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:07.321 19:21:33 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:07.321 19:21:33 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:07.321 19:21:33 event -- common/autotest_common.sh@10 -- # set +x 00:06:07.321 ************************************ 00:06:07.321 START TEST event_scheduler 00:06:07.321 ************************************ 00:06:07.321 19:21:33 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:07.321 * Looking for test storage... 00:06:07.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:07.321 19:21:33 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:07.321 19:21:33 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3371328 00:06:07.321 19:21:33 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:07.321 19:21:33 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3371328 00:06:07.321 19:21:33 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:07.321 19:21:33 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 3371328 ']' 00:06:07.321 19:21:33 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.321 19:21:33 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:07.321 19:21:33 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:07.321 19:21:33 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:07.321 19:21:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:07.321 [2024-05-15 19:21:33.345792] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:06:07.321 [2024-05-15 19:21:33.345858] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3371328 ] 00:06:07.321 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.322 [2024-05-15 19:21:33.410681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:07.322 [2024-05-15 19:21:33.479409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.322 [2024-05-15 19:21:33.479630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.322 [2024-05-15 19:21:33.479631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:07.322 [2024-05-15 19:21:33.479445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.582 19:21:33 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:07.582 19:21:33 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:06:07.582 19:21:33 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:07.582 19:21:33 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.582 19:21:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:07.582 POWER: Env isn't set yet! 00:06:07.582 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:07.582 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:07.582 POWER: Cannot set governor of lcore 0 to userspace 00:06:07.582 POWER: Attempting to initialise PSTAT power management... 00:06:07.582 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:07.582 POWER: Initialized successfully for lcore 0 power management 00:06:07.582 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:07.582 POWER: Initialized successfully for lcore 1 power management 00:06:07.582 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:07.582 POWER: Initialized successfully for lcore 2 power management 00:06:07.582 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:07.582 POWER: Initialized successfully for lcore 3 power management 00:06:07.582 19:21:33 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.582 19:21:33 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:07.582 19:21:33 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.582 19:21:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:07.582 [2024-05-15 19:21:33.644733] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
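Because the scheduler app above is launched with --wait-for-rpc, it stays in a pre-init state until RPCs choose a scheduler and start the framework; rpc_cmd in the output is, in effect, a wrapper around scripts/rpc.py. A rough hand-run equivalent against the default /var/tmp/spdk.sock, with the ./spdk path assumed for a local checkout:

  ./spdk/scripts/rpc.py framework_set_scheduler dynamic   # select the scheduler while the app is still waiting
  ./spdk/scripts/rpc.py framework_start_init              # finish subsystem init; the scheduler test proper starts after this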
00:06:07.582 19:21:33 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.582 19:21:33 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:07.582 19:21:33 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:07.582 19:21:33 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:07.582 19:21:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:07.582 ************************************ 00:06:07.582 START TEST scheduler_create_thread 00:06:07.582 ************************************ 00:06:07.582 19:21:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:06:07.582 19:21:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:07.582 19:21:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.582 19:21:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.582 2 00:06:07.582 19:21:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.582 19:21:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:07.582 19:21:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.582 19:21:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.582 3 00:06:07.582 19:21:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.582 19:21:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:07.582 19:21:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.582 19:21:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.582 4 00:06:07.582 19:21:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.582 19:21:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:07.582 19:21:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.582 19:21:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.582 5 00:06:07.582 19:21:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.582 19:21:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:07.582 19:21:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.582 19:21:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.843 6 00:06:07.843 19:21:33 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.843 19:21:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:07.843 19:21:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.843 19:21:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.843 7 00:06:07.843 19:21:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.843 19:21:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:07.843 19:21:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.843 19:21:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.843 8 00:06:07.843 19:21:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.843 19:21:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:07.843 19:21:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.843 19:21:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.248 9 00:06:09.248 19:21:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.248 19:21:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:09.248 19:21:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.248 19:21:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.814 10 00:06:09.814 19:21:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.814 19:21:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:09.814 19:21:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.814 19:21:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.756 19:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.756 19:21:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:10.756 19:21:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:10.756 19:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.756 19:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.326 19:21:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.326 19:21:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:11.326 19:21:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.326 19:21:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.897 19:21:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.897 19:21:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:11.897 19:21:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:11.897 19:21:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.897 19:21:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.466 19:21:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.466 00:06:12.466 real 0m4.724s 00:06:12.466 user 0m0.023s 00:06:12.466 sys 0m0.008s 00:06:12.466 19:21:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:12.466 19:21:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.466 ************************************ 00:06:12.466 END TEST scheduler_create_thread 00:06:12.466 ************************************ 00:06:12.466 19:21:38 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:12.466 19:21:38 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3371328 00:06:12.466 19:21:38 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 3371328 ']' 00:06:12.466 19:21:38 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 3371328 00:06:12.466 19:21:38 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:06:12.466 19:21:38 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:12.466 19:21:38 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3371328 00:06:12.466 19:21:38 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:12.466 19:21:38 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:12.466 19:21:38 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3371328' 00:06:12.466 killing process with pid 3371328 00:06:12.466 19:21:38 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 3371328 00:06:12.466 19:21:38 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 3371328 00:06:12.466 [2024-05-15 19:21:38.566206] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
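The scheduler_create_thread subtest above drives the app through a test-supplied RPC plugin; scheduler_thread_create, scheduler_thread_set_active and scheduler_thread_delete are not core SPDK methods. A sketch of issuing the same calls by hand, assuming the plugin module ships under ./spdk/test/event/scheduler and must be reachable on PYTHONPATH; the thread ids 11 and 12 are simply the ones reported in this run:

  export PYTHONPATH=$PYTHONPATH:./spdk/test/event/scheduler
  ./spdk/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100   # name, cpumask, activity value as used above
  ./spdk/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
  ./spdk/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12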
00:06:12.727 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:06:12.727 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:12.727 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:06:12.727 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:12.727 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:06:12.727 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:12.727 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:06:12.727 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:12.727 00:06:12.727 real 0m5.572s 00:06:12.727 user 0m12.855s 00:06:12.727 sys 0m0.343s 00:06:12.727 19:21:38 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:12.727 19:21:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:12.727 ************************************ 00:06:12.727 END TEST event_scheduler 00:06:12.727 ************************************ 00:06:12.727 19:21:38 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:12.727 19:21:38 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:12.727 19:21:38 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:12.727 19:21:38 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:12.727 19:21:38 event -- common/autotest_common.sh@10 -- # set +x 00:06:12.727 ************************************ 00:06:12.727 START TEST app_repeat 00:06:12.727 ************************************ 00:06:12.727 19:21:38 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:06:12.727 19:21:38 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.727 19:21:38 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.727 19:21:38 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:12.727 19:21:38 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.727 19:21:38 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:12.727 19:21:38 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:12.727 19:21:38 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:12.727 19:21:38 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3372596 00:06:12.727 19:21:38 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:12.727 19:21:38 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3372596' 00:06:12.727 Process app_repeat pid: 3372596 00:06:12.727 19:21:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:12.727 19:21:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:12.727 spdk_app_start Round 0 00:06:12.727 19:21:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3372596 /var/tmp/spdk-nbd.sock 00:06:12.727 19:21:38 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3372596 ']' 00:06:12.727 19:21:38 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:12.727 19:21:38 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:12.727 19:21:38 
event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:12.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:12.727 19:21:38 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:12.727 19:21:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:12.727 19:21:38 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:12.727 [2024-05-15 19:21:38.890611] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:06:12.727 [2024-05-15 19:21:38.890670] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3372596 ] 00:06:12.987 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.987 [2024-05-15 19:21:38.976390] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:12.987 [2024-05-15 19:21:39.042308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.987 [2024-05-15 19:21:39.042319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.987 19:21:39 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:12.987 19:21:39 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:12.987 19:21:39 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:13.247 Malloc0 00:06:13.247 19:21:39 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:13.506 Malloc1 00:06:13.506 19:21:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:13.506 19:21:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.506 19:21:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.506 19:21:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:13.506 19:21:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.506 19:21:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:13.506 19:21:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:13.506 19:21:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.506 19:21:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.506 19:21:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:13.506 19:21:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.506 19:21:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:13.506 19:21:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:13.506 19:21:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:13.506 19:21:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.506 19:21:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:13.766 /dev/nbd0 00:06:13.766 19:21:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:13.766 19:21:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:13.766 19:21:39 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:13.766 19:21:39 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:13.766 19:21:39 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:13.766 19:21:39 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:13.766 19:21:39 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:13.766 19:21:39 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:13.766 19:21:39 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:13.766 19:21:39 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:13.766 19:21:39 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:13.766 1+0 records in 00:06:13.766 1+0 records out 00:06:13.766 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289768 s, 14.1 MB/s 00:06:13.766 19:21:39 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:13.766 19:21:39 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:13.766 19:21:39 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:13.766 19:21:39 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:13.766 19:21:39 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:13.766 19:21:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.766 19:21:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.766 19:21:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:14.026 /dev/nbd1 00:06:14.026 19:21:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:14.026 19:21:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:14.026 19:21:39 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:14.026 19:21:39 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:14.026 19:21:39 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:14.026 19:21:39 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:14.026 19:21:39 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:14.026 19:21:39 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:14.026 19:21:39 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:14.026 19:21:39 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:14.026 19:21:39 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:14.026 1+0 records in 00:06:14.026 1+0 records out 00:06:14.026 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000239679 s, 17.1 MB/s 00:06:14.026 19:21:39 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:14.026 19:21:40 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:14.026 19:21:40 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:14.026 19:21:40 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:14.026 19:21:40 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:14.026 19:21:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.026 19:21:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.026 19:21:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.026 19:21:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.026 19:21:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.286 19:21:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:14.286 { 00:06:14.286 "nbd_device": "/dev/nbd0", 00:06:14.286 "bdev_name": "Malloc0" 00:06:14.286 }, 00:06:14.286 { 00:06:14.286 "nbd_device": "/dev/nbd1", 00:06:14.286 "bdev_name": "Malloc1" 00:06:14.286 } 00:06:14.286 ]' 00:06:14.286 19:21:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:14.286 { 00:06:14.286 "nbd_device": "/dev/nbd0", 00:06:14.286 "bdev_name": "Malloc0" 00:06:14.286 }, 00:06:14.286 { 00:06:14.286 "nbd_device": "/dev/nbd1", 00:06:14.286 "bdev_name": "Malloc1" 00:06:14.286 } 00:06:14.286 ]' 00:06:14.286 19:21:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.286 19:21:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:14.286 /dev/nbd1' 00:06:14.286 19:21:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:14.286 /dev/nbd1' 00:06:14.286 19:21:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.286 19:21:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:14.286 19:21:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:14.286 19:21:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:14.286 19:21:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:14.286 19:21:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:14.286 19:21:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.286 19:21:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.286 19:21:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:14.286 19:21:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:14.286 19:21:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:14.286 19:21:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:14.286 256+0 records in 00:06:14.286 256+0 records out 00:06:14.286 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011903 s, 88.1 MB/s 00:06:14.286 19:21:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in 
"${nbd_list[@]}" 00:06:14.286 19:21:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:14.286 256+0 records in 00:06:14.286 256+0 records out 00:06:14.286 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0163358 s, 64.2 MB/s 00:06:14.286 19:21:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.286 19:21:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:14.286 256+0 records in 00:06:14.286 256+0 records out 00:06:14.286 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0181285 s, 57.8 MB/s 00:06:14.286 19:21:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:14.286 19:21:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.286 19:21:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.287 19:21:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:14.287 19:21:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:14.287 19:21:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:14.287 19:21:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:14.287 19:21:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.287 19:21:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:14.287 19:21:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.287 19:21:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:14.287 19:21:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:14.287 19:21:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:14.287 19:21:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.287 19:21:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.287 19:21:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:14.287 19:21:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:14.287 19:21:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.287 19:21:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:14.547 19:21:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:14.547 19:21:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:14.547 19:21:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:14.547 19:21:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.547 19:21:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.547 19:21:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:14.547 19:21:40 event.app_repeat -- bdev/nbd_common.sh@41 
-- # break 00:06:14.547 19:21:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.547 19:21:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.547 19:21:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:14.807 19:21:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:14.807 19:21:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:14.807 19:21:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:14.807 19:21:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.807 19:21:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.807 19:21:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:14.807 19:21:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:14.807 19:21:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.807 19:21:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.807 19:21:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.807 19:21:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.066 19:21:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:15.066 19:21:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:15.066 19:21:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:15.066 19:21:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:15.066 19:21:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:15.066 19:21:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:15.066 19:21:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:15.066 19:21:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:15.066 19:21:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:15.066 19:21:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:15.066 19:21:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:15.066 19:21:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:15.066 19:21:41 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:15.397 19:21:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:15.397 [2024-05-15 19:21:41.405839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:15.397 [2024-05-15 19:21:41.470321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.397 [2024-05-15 19:21:41.470323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.397 [2024-05-15 19:21:41.502452] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:15.397 [2024-05-15 19:21:41.502486] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
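Each app_repeat round traced here performs the same nbd round trip. Stripped of the xtrace bookkeeping, and with /tmp stand-ins for the repo-local temp files and an RPC shorthand variable introduced only for readability, the flow is roughly:

RPC="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$RPC bdev_malloc_create 64 4096                 # -> Malloc0
$RPC bdev_malloc_create 64 4096                 # -> Malloc1
$RPC nbd_start_disk Malloc0 /dev/nbd0
$RPC nbd_start_disk Malloc1 /dev/nbd1
dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if=/tmp/nbdrandtest of=$nbd bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest $nbd          # read back and verify the written data
done
$RPC nbd_stop_disk /dev/nbd0
$RPC nbd_stop_disk /dev/nbd1
$RPC spdk_kill_instance SIGTERM                 # ends the round; the next round restarts the app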
00:06:18.747 19:21:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:18.747 19:21:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:18.747 spdk_app_start Round 1 00:06:18.747 19:21:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3372596 /var/tmp/spdk-nbd.sock 00:06:18.747 19:21:44 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3372596 ']' 00:06:18.747 19:21:44 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:18.747 19:21:44 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:18.747 19:21:44 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:18.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:18.747 19:21:44 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:18.747 19:21:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:18.747 19:21:44 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:18.747 19:21:44 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:18.747 19:21:44 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.747 Malloc0 00:06:18.747 19:21:44 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.747 Malloc1 00:06:18.747 19:21:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:18.747 19:21:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.747 19:21:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.747 19:21:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:18.747 19:21:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.747 19:21:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:18.747 19:21:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:18.747 19:21:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.747 19:21:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.747 19:21:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:18.747 19:21:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.747 19:21:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:18.747 19:21:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:18.747 19:21:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:18.747 19:21:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.747 19:21:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:19.008 /dev/nbd0 00:06:19.008 19:21:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:19.008 19:21:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:06:19.008 19:21:45 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:19.008 19:21:45 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:19.008 19:21:45 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:19.008 19:21:45 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:19.008 19:21:45 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:19.008 19:21:45 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:19.008 19:21:45 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:19.008 19:21:45 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:19.008 19:21:45 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.008 1+0 records in 00:06:19.008 1+0 records out 00:06:19.008 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000208455 s, 19.6 MB/s 00:06:19.008 19:21:45 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.008 19:21:45 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:19.008 19:21:45 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.008 19:21:45 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:19.008 19:21:45 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:19.008 19:21:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.008 19:21:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.008 19:21:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:19.268 /dev/nbd1 00:06:19.268 19:21:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:19.268 19:21:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:19.268 19:21:45 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:19.268 19:21:45 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:19.268 19:21:45 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:19.268 19:21:45 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:19.268 19:21:45 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:19.268 19:21:45 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:19.268 19:21:45 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:19.268 19:21:45 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:19.268 19:21:45 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.268 1+0 records in 00:06:19.268 1+0 records out 00:06:19.268 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301916 s, 13.6 MB/s 00:06:19.268 19:21:45 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.268 19:21:45 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:19.268 19:21:45 event.app_repeat -- 
common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.268 19:21:45 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:19.269 19:21:45 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:19.269 19:21:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.269 19:21:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.269 19:21:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.269 19:21:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.269 19:21:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.530 19:21:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:19.530 { 00:06:19.530 "nbd_device": "/dev/nbd0", 00:06:19.530 "bdev_name": "Malloc0" 00:06:19.530 }, 00:06:19.530 { 00:06:19.530 "nbd_device": "/dev/nbd1", 00:06:19.530 "bdev_name": "Malloc1" 00:06:19.530 } 00:06:19.530 ]' 00:06:19.530 19:21:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:19.530 { 00:06:19.530 "nbd_device": "/dev/nbd0", 00:06:19.530 "bdev_name": "Malloc0" 00:06:19.530 }, 00:06:19.530 { 00:06:19.530 "nbd_device": "/dev/nbd1", 00:06:19.530 "bdev_name": "Malloc1" 00:06:19.530 } 00:06:19.530 ]' 00:06:19.530 19:21:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.530 19:21:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:19.530 /dev/nbd1' 00:06:19.530 19:21:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:19.530 /dev/nbd1' 00:06:19.530 19:21:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.530 19:21:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:19.530 19:21:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:19.530 19:21:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:19.530 19:21:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:19.530 19:21:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:19.530 19:21:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.530 19:21:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.530 19:21:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:19.530 19:21:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:19.530 19:21:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:19.530 19:21:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:19.530 256+0 records in 00:06:19.530 256+0 records out 00:06:19.530 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124942 s, 83.9 MB/s 00:06:19.530 19:21:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.530 19:21:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:19.530 256+0 records in 00:06:19.530 256+0 records out 00:06:19.530 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0209393 s, 50.1 MB/s 00:06:19.530 19:21:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.530 19:21:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:19.530 256+0 records in 00:06:19.530 256+0 records out 00:06:19.530 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0168508 s, 62.2 MB/s 00:06:19.530 19:21:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:19.530 19:21:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.530 19:21:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.530 19:21:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:19.530 19:21:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:19.530 19:21:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:19.530 19:21:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:19.530 19:21:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.530 19:21:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:19.530 19:21:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.530 19:21:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:19.530 19:21:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:19.530 19:21:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:19.530 19:21:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.530 19:21:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.530 19:21:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:19.530 19:21:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:19.530 19:21:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.530 19:21:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:19.790 19:21:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:19.790 19:21:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:19.790 19:21:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:19.790 19:21:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.790 19:21:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.790 19:21:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:19.790 19:21:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:19.790 19:21:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.790 19:21:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.790 19:21:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:20.050 19:21:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:20.051 19:21:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:20.051 19:21:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:20.051 19:21:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.051 19:21:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.051 19:21:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:20.051 19:21:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:20.051 19:21:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.051 19:21:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.051 19:21:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.051 19:21:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.311 19:21:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:20.311 19:21:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:20.311 19:21:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:20.311 19:21:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:20.311 19:21:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:20.311 19:21:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.311 19:21:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:20.311 19:21:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:20.311 19:21:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:20.311 19:21:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:20.311 19:21:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:20.311 19:21:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:20.311 19:21:46 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:20.572 19:21:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:20.833 [2024-05-15 19:21:46.764490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:20.833 [2024-05-15 19:21:46.828324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.833 [2024-05-15 19:21:46.828326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.833 [2024-05-15 19:21:46.861150] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:20.833 [2024-05-15 19:21:46.861184] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
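The device-count checks in each round parse the nbd_get_disks output shown above with the same jq filter that appears in the trace. A small sketch of that check, with the expected value spelled out as a variable for illustration:

expected=2        # 2 while both Malloc bdevs are exported, 0 after the nbd_stop_disk calls
disks_json=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
count=$(echo "$names" | grep -c /dev/nbd || true)   # grep -c prints 0 for an empty list
[ "$count" -eq "$expected" ]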
00:06:24.131 19:21:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:24.131 19:21:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:24.131 spdk_app_start Round 2 00:06:24.131 19:21:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3372596 /var/tmp/spdk-nbd.sock 00:06:24.131 19:21:49 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3372596 ']' 00:06:24.131 19:21:49 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:24.131 19:21:49 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:24.131 19:21:49 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:24.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:24.131 19:21:49 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:24.131 19:21:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:24.131 19:21:49 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:24.131 19:21:49 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:24.131 19:21:49 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:24.131 Malloc0 00:06:24.131 19:21:50 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:24.131 Malloc1 00:06:24.131 19:21:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:24.131 19:21:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.131 19:21:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.131 19:21:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:24.131 19:21:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.131 19:21:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:24.131 19:21:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:24.131 19:21:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.131 19:21:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.131 19:21:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:24.131 19:21:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.131 19:21:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:24.131 19:21:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:24.131 19:21:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:24.131 19:21:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.131 19:21:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:24.391 /dev/nbd0 00:06:24.391 19:21:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:24.391 19:21:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:06:24.391 19:21:50 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:24.391 19:21:50 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:24.391 19:21:50 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:24.391 19:21:50 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:24.391 19:21:50 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:24.391 19:21:50 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:24.391 19:21:50 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:24.391 19:21:50 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:24.391 19:21:50 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:24.391 1+0 records in 00:06:24.391 1+0 records out 00:06:24.391 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274159 s, 14.9 MB/s 00:06:24.392 19:21:50 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:24.392 19:21:50 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:24.392 19:21:50 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:24.392 19:21:50 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:24.392 19:21:50 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:24.392 19:21:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:24.392 19:21:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.392 19:21:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:24.653 /dev/nbd1 00:06:24.653 19:21:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:24.653 19:21:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:24.653 19:21:50 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:24.653 19:21:50 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:24.653 19:21:50 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:24.653 19:21:50 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:24.653 19:21:50 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:24.653 19:21:50 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:24.653 19:21:50 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:24.653 19:21:50 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:24.653 19:21:50 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:24.653 1+0 records in 00:06:24.653 1+0 records out 00:06:24.653 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000200414 s, 20.4 MB/s 00:06:24.653 19:21:50 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:24.653 19:21:50 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:24.653 19:21:50 event.app_repeat -- 
common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:24.653 19:21:50 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:24.653 19:21:50 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:24.653 19:21:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:24.653 19:21:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.653 19:21:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:24.653 19:21:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.653 19:21:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:24.914 19:21:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:24.914 { 00:06:24.914 "nbd_device": "/dev/nbd0", 00:06:24.914 "bdev_name": "Malloc0" 00:06:24.914 }, 00:06:24.914 { 00:06:24.914 "nbd_device": "/dev/nbd1", 00:06:24.914 "bdev_name": "Malloc1" 00:06:24.914 } 00:06:24.914 ]' 00:06:24.914 19:21:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:24.914 { 00:06:24.914 "nbd_device": "/dev/nbd0", 00:06:24.914 "bdev_name": "Malloc0" 00:06:24.914 }, 00:06:24.914 { 00:06:24.914 "nbd_device": "/dev/nbd1", 00:06:24.914 "bdev_name": "Malloc1" 00:06:24.914 } 00:06:24.914 ]' 00:06:24.914 19:21:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:24.914 19:21:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:24.914 /dev/nbd1' 00:06:24.914 19:21:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:24.914 /dev/nbd1' 00:06:24.914 19:21:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:24.914 19:21:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:24.914 19:21:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:24.914 19:21:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:24.914 19:21:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:24.914 19:21:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:24.914 19:21:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.914 19:21:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:24.914 19:21:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:24.914 19:21:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:24.914 19:21:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:24.914 19:21:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:24.914 256+0 records in 00:06:24.914 256+0 records out 00:06:24.914 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011879 s, 88.3 MB/s 00:06:24.914 19:21:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:24.914 19:21:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:24.914 256+0 records in 00:06:24.914 256+0 records out 00:06:24.914 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.01599 s, 65.6 MB/s 00:06:24.914 19:21:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:24.914 19:21:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:24.914 256+0 records in 00:06:24.914 256+0 records out 00:06:24.914 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.016768 s, 62.5 MB/s 00:06:24.914 19:21:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:24.914 19:21:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.914 19:21:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:24.914 19:21:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:24.914 19:21:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:24.914 19:21:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:24.914 19:21:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:24.914 19:21:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.914 19:21:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:24.914 19:21:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.914 19:21:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:24.914 19:21:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:24.914 19:21:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:24.914 19:21:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.914 19:21:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.914 19:21:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:24.914 19:21:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:24.914 19:21:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.914 19:21:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:25.175 19:21:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:25.175 19:21:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:25.175 19:21:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:25.175 19:21:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.175 19:21:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.175 19:21:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:25.175 19:21:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:25.175 19:21:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.175 19:21:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.175 19:21:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:25.437 19:21:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:25.437 19:21:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:25.437 19:21:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:25.437 19:21:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.437 19:21:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.437 19:21:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:25.437 19:21:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:25.437 19:21:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.437 19:21:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:25.437 19:21:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.437 19:21:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:25.697 19:21:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:25.697 19:21:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:25.697 19:21:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:25.697 19:21:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:25.697 19:21:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:25.697 19:21:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.697 19:21:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:25.697 19:21:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:25.697 19:21:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:25.697 19:21:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:25.697 19:21:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:25.697 19:21:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:25.697 19:21:51 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:25.957 19:21:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:25.957 [2024-05-15 19:21:52.080178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:26.218 [2024-05-15 19:21:52.144848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.218 [2024-05-15 19:21:52.144854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.218 [2024-05-15 19:21:52.176893] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:26.218 [2024-05-15 19:21:52.176929] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
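The waitfornbd/waitfornbd_exit helpers that show up in every round poll /proc/partitions before touching the device. Approximately, following the steps visible in the xtrace (the 20-try cap, the grep on /proc/partitions, the single-block direct read and the size check match the trace; the sleep interval between tries and the /tmp path are assumptions):

waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1                               # retry interval assumed
    done
    # read one block back through the device to confirm it is actually serving I/O
    dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    local size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [ "$size" != 0 ]
}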
00:06:28.775 19:21:54 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3372596 /var/tmp/spdk-nbd.sock 00:06:28.775 19:21:54 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3372596 ']' 00:06:28.775 19:21:54 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:28.775 19:21:54 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:28.775 19:21:54 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:28.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:28.775 19:21:54 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:28.775 19:21:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:29.035 19:21:55 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:29.035 19:21:55 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:29.035 19:21:55 event.app_repeat -- event/event.sh@39 -- # killprocess 3372596 00:06:29.035 19:21:55 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 3372596 ']' 00:06:29.035 19:21:55 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 3372596 00:06:29.035 19:21:55 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:06:29.035 19:21:55 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:29.035 19:21:55 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3372596 00:06:29.035 19:21:55 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:29.035 19:21:55 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:29.035 19:21:55 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3372596' 00:06:29.035 killing process with pid 3372596 00:06:29.035 19:21:55 event.app_repeat -- common/autotest_common.sh@965 -- # kill 3372596 00:06:29.035 19:21:55 event.app_repeat -- common/autotest_common.sh@970 -- # wait 3372596 00:06:29.296 spdk_app_start is called in Round 0. 00:06:29.296 Shutdown signal received, stop current app iteration 00:06:29.296 Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 reinitialization... 00:06:29.296 spdk_app_start is called in Round 1. 00:06:29.296 Shutdown signal received, stop current app iteration 00:06:29.296 Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 reinitialization... 00:06:29.296 spdk_app_start is called in Round 2. 00:06:29.296 Shutdown signal received, stop current app iteration 00:06:29.296 Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 reinitialization... 00:06:29.296 spdk_app_start is called in Round 3. 
00:06:29.296 Shutdown signal received, stop current app iteration 00:06:29.296 19:21:55 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:29.296 19:21:55 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:29.296 00:06:29.296 real 0m16.461s 00:06:29.296 user 0m36.295s 00:06:29.296 sys 0m2.405s 00:06:29.296 19:21:55 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:29.296 19:21:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:29.296 ************************************ 00:06:29.296 END TEST app_repeat 00:06:29.296 ************************************ 00:06:29.296 19:21:55 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:29.296 19:21:55 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:29.296 19:21:55 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:29.296 19:21:55 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:29.296 19:21:55 event -- common/autotest_common.sh@10 -- # set +x 00:06:29.296 ************************************ 00:06:29.296 START TEST cpu_locks 00:06:29.296 ************************************ 00:06:29.296 19:21:55 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:29.556 * Looking for test storage... 00:06:29.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:29.557 19:21:55 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:29.557 19:21:55 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:29.557 19:21:55 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:29.557 19:21:55 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:29.557 19:21:55 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:29.557 19:21:55 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:29.557 19:21:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.557 ************************************ 00:06:29.557 START TEST default_locks 00:06:29.557 ************************************ 00:06:29.557 19:21:55 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:06:29.557 19:21:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3376061 00:06:29.557 19:21:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3376061 00:06:29.557 19:21:55 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 3376061 ']' 00:06:29.557 19:21:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:29.557 19:21:55 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.557 19:21:55 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:29.557 19:21:55 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:29.557 19:21:55 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:29.557 19:21:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.557 [2024-05-15 19:21:55.611771] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:06:29.557 [2024-05-15 19:21:55.611831] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3376061 ] 00:06:29.557 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.557 [2024-05-15 19:21:55.699888] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.817 [2024-05-15 19:21:55.770913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.386 19:21:56 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:30.386 19:21:56 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:06:30.386 19:21:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3376061 00:06:30.386 19:21:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3376061 00:06:30.386 19:21:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:30.957 lslocks: write error 00:06:30.957 19:21:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3376061 00:06:30.957 19:21:56 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 3376061 ']' 00:06:30.957 19:21:56 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 3376061 00:06:30.957 19:21:56 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:06:30.957 19:21:56 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:30.957 19:21:56 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3376061 00:06:30.957 19:21:56 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:30.957 19:21:56 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:30.957 19:21:56 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3376061' 00:06:30.957 killing process with pid 3376061 00:06:30.957 19:21:56 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 3376061 00:06:30.957 19:21:56 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 3376061 00:06:30.957 19:21:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3376061 00:06:30.957 19:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:30.957 19:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3376061 00:06:30.957 19:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:30.957 19:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.957 19:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:30.957 19:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.957 19:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- 
# waitforlisten 3376061 00:06:30.957 19:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 3376061 ']' 00:06:30.957 19:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.957 19:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:30.957 19:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.957 19:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:30.957 19:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.957 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3376061) - No such process 00:06:30.957 ERROR: process (pid: 3376061) is no longer running 00:06:30.957 19:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:30.957 19:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:06:30.957 19:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:30.957 19:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:30.957 19:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:30.957 19:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:30.957 19:21:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:30.957 19:21:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:30.957 19:21:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:30.957 19:21:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:30.957 00:06:30.957 real 0m1.578s 00:06:30.957 user 0m1.728s 00:06:30.957 sys 0m0.533s 00:06:30.957 19:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:30.957 19:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.957 ************************************ 00:06:30.957 END TEST default_locks 00:06:30.957 ************************************ 00:06:31.218 19:21:57 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:31.218 19:21:57 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:31.218 19:21:57 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:31.218 19:21:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.218 ************************************ 00:06:31.218 START TEST default_locks_via_rpc 00:06:31.218 ************************************ 00:06:31.218 19:21:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:06:31.218 19:21:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3376413 00:06:31.218 19:21:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3376413 00:06:31.218 19:21:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:31.218 19:21:57 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3376413 ']' 00:06:31.218 19:21:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.218 19:21:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:31.218 19:21:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.218 19:21:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:31.218 19:21:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.218 [2024-05-15 19:21:57.252763] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:06:31.218 [2024-05-15 19:21:57.252811] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3376413 ] 00:06:31.218 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.218 [2024-05-15 19:21:57.335863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.478 [2024-05-15 19:21:57.403518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.049 19:21:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:32.049 19:21:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:32.049 19:21:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:32.049 19:21:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.049 19:21:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.049 19:21:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.049 19:21:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:32.049 19:21:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:32.049 19:21:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:32.049 19:21:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:32.049 19:21:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:32.049 19:21:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.049 19:21:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.049 19:21:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.049 19:21:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3376413 00:06:32.049 19:21:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3376413 00:06:32.049 19:21:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:32.310 19:21:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3376413 00:06:32.310 19:21:58 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 3376413 ']' 00:06:32.310 19:21:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 3376413 00:06:32.310 19:21:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:06:32.310 19:21:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:32.310 19:21:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3376413 00:06:32.310 19:21:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:32.310 19:21:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:32.310 19:21:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3376413' 00:06:32.310 killing process with pid 3376413 00:06:32.310 19:21:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 3376413 00:06:32.310 19:21:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 3376413 00:06:32.571 00:06:32.571 real 0m1.362s 00:06:32.571 user 0m1.506s 00:06:32.571 sys 0m0.431s 00:06:32.571 19:21:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:32.571 19:21:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.571 ************************************ 00:06:32.571 END TEST default_locks_via_rpc 00:06:32.571 ************************************ 00:06:32.571 19:21:58 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:32.571 19:21:58 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:32.571 19:21:58 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:32.571 19:21:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.571 ************************************ 00:06:32.571 START TEST non_locking_app_on_locked_coremask 00:06:32.571 ************************************ 00:06:32.571 19:21:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:06:32.571 19:21:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3376706 00:06:32.571 19:21:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3376706 /var/tmp/spdk.sock 00:06:32.571 19:21:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:32.571 19:21:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3376706 ']' 00:06:32.571 19:21:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.571 19:21:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:32.571 19:21:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:32.571 19:21:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:32.571 19:21:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.571 [2024-05-15 19:21:58.694768] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:06:32.571 [2024-05-15 19:21:58.694825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3376706 ] 00:06:32.571 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.831 [2024-05-15 19:21:58.781526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.831 [2024-05-15 19:21:58.850957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.401 19:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:33.401 19:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:33.401 19:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3377034 00:06:33.401 19:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:33.401 19:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3377034 /var/tmp/spdk2.sock 00:06:33.401 19:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3377034 ']' 00:06:33.401 19:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.401 19:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:33.401 19:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:33.401 19:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:33.401 19:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.401 [2024-05-15 19:21:59.585940] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:06:33.401 [2024-05-15 19:21:59.585990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3377034 ] 00:06:33.662 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.662 [2024-05-15 19:21:59.683578] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:33.662 [2024-05-15 19:21:59.683605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.662 [2024-05-15 19:21:59.812655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.604 19:22:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:34.604 19:22:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:34.604 19:22:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3376706 00:06:34.604 19:22:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3376706 00:06:34.604 19:22:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:34.864 lslocks: write error 00:06:34.864 19:22:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3376706 00:06:34.864 19:22:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3376706 ']' 00:06:34.864 19:22:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3376706 00:06:34.864 19:22:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:34.864 19:22:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:34.864 19:22:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3376706 00:06:34.864 19:22:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:34.864 19:22:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:34.864 19:22:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3376706' 00:06:34.864 killing process with pid 3376706 00:06:34.864 19:22:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3376706 00:06:34.864 19:22:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3376706 00:06:35.435 19:22:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3377034 00:06:35.435 19:22:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3377034 ']' 00:06:35.435 19:22:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3377034 00:06:35.435 19:22:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:35.435 19:22:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:35.435 19:22:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3377034 00:06:35.435 19:22:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:35.435 19:22:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:35.435 19:22:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3377034' 00:06:35.435 
killing process with pid 3377034 00:06:35.435 19:22:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3377034 00:06:35.435 19:22:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3377034 00:06:35.696 00:06:35.696 real 0m3.028s 00:06:35.696 user 0m3.437s 00:06:35.696 sys 0m0.858s 00:06:35.696 19:22:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:35.696 19:22:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.696 ************************************ 00:06:35.696 END TEST non_locking_app_on_locked_coremask 00:06:35.696 ************************************ 00:06:35.696 19:22:01 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:35.696 19:22:01 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:35.696 19:22:01 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:35.696 19:22:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.696 ************************************ 00:06:35.696 START TEST locking_app_on_unlocked_coremask 00:06:35.696 ************************************ 00:06:35.696 19:22:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:06:35.696 19:22:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3377413 00:06:35.696 19:22:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3377413 /var/tmp/spdk.sock 00:06:35.696 19:22:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:35.696 19:22:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3377413 ']' 00:06:35.696 19:22:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.696 19:22:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:35.696 19:22:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.696 19:22:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:35.696 19:22:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.696 [2024-05-15 19:22:01.812441] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:06:35.696 [2024-05-15 19:22:01.812488] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3377413 ] 00:06:35.696 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.956 [2024-05-15 19:22:01.895488] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:35.956 [2024-05-15 19:22:01.895517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.956 [2024-05-15 19:22:01.958539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.526 19:22:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:36.526 19:22:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:36.526 19:22:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3377693 00:06:36.526 19:22:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3377693 /var/tmp/spdk2.sock 00:06:36.526 19:22:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3377693 ']' 00:06:36.526 19:22:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:36.526 19:22:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:36.526 19:22:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:36.526 19:22:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:36.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:36.526 19:22:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:36.526 19:22:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.526 [2024-05-15 19:22:02.710214] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
00:06:36.526 [2024-05-15 19:22:02.710268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3377693 ] 00:06:36.786 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.786 [2024-05-15 19:22:02.827007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.786 [2024-05-15 19:22:02.960583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.726 19:22:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:37.726 19:22:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:37.726 19:22:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3377693 00:06:37.726 19:22:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3377693 00:06:37.726 19:22:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.987 lslocks: write error 00:06:37.987 19:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3377413 00:06:37.987 19:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3377413 ']' 00:06:37.987 19:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 3377413 00:06:37.987 19:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:37.987 19:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:37.987 19:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3377413 00:06:37.987 19:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:37.987 19:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:37.987 19:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3377413' 00:06:37.987 killing process with pid 3377413 00:06:37.987 19:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 3377413 00:06:37.987 19:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 3377413 00:06:38.558 19:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3377693 00:06:38.558 19:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3377693 ']' 00:06:38.558 19:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 3377693 00:06:38.558 19:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:38.558 19:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:38.558 19:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3377693 00:06:38.558 19:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 
00:06:38.558 19:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:38.558 19:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3377693' 00:06:38.558 killing process with pid 3377693 00:06:38.558 19:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 3377693 00:06:38.558 19:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 3377693 00:06:38.819 00:06:38.819 real 0m3.083s 00:06:38.819 user 0m3.519s 00:06:38.819 sys 0m0.880s 00:06:38.819 19:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:38.819 19:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.819 ************************************ 00:06:38.819 END TEST locking_app_on_unlocked_coremask 00:06:38.819 ************************************ 00:06:38.819 19:22:04 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:38.819 19:22:04 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:38.819 19:22:04 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:38.819 19:22:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.819 ************************************ 00:06:38.819 START TEST locking_app_on_locked_coremask 00:06:38.819 ************************************ 00:06:38.819 19:22:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:06:38.819 19:22:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3378119 00:06:38.819 19:22:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3378119 /var/tmp/spdk.sock 00:06:38.819 19:22:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3378119 ']' 00:06:38.819 19:22:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:38.819 19:22:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.819 19:22:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:38.819 19:22:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.819 19:22:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:38.819 19:22:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.819 [2024-05-15 19:22:04.963443] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
00:06:38.819 [2024-05-15 19:22:04.963503] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3378119 ] 00:06:38.819 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.080 [2024-05-15 19:22:05.048336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.080 [2024-05-15 19:22:05.114246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.650 19:22:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:39.651 19:22:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:39.651 19:22:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:39.651 19:22:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3378298 00:06:39.651 19:22:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3378298 /var/tmp/spdk2.sock 00:06:39.651 19:22:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:39.651 19:22:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3378298 /var/tmp/spdk2.sock 00:06:39.651 19:22:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:39.651 19:22:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.651 19:22:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:39.651 19:22:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.651 19:22:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3378298 /var/tmp/spdk2.sock 00:06:39.651 19:22:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3378298 ']' 00:06:39.651 19:22:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:39.651 19:22:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:39.651 19:22:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:39.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:39.651 19:22:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:39.651 19:22:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.912 [2024-05-15 19:22:05.853255] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
00:06:39.912 [2024-05-15 19:22:05.853306] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3378298 ] 00:06:39.912 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.912 [2024-05-15 19:22:05.952168] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3378119 has claimed it. 00:06:39.912 [2024-05-15 19:22:05.952208] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:40.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3378298) - No such process 00:06:40.482 ERROR: process (pid: 3378298) is no longer running 00:06:40.482 19:22:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:40.482 19:22:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:40.482 19:22:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:40.482 19:22:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:40.482 19:22:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:40.482 19:22:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:40.482 19:22:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3378119 00:06:40.482 19:22:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3378119 00:06:40.482 19:22:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:41.053 lslocks: write error 00:06:41.053 19:22:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3378119 00:06:41.053 19:22:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3378119 ']' 00:06:41.053 19:22:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3378119 00:06:41.053 19:22:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:41.053 19:22:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:41.053 19:22:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3378119 00:06:41.053 19:22:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:41.053 19:22:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:41.053 19:22:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3378119' 00:06:41.053 killing process with pid 3378119 00:06:41.053 19:22:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3378119 00:06:41.053 19:22:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3378119 00:06:41.313 00:06:41.313 real 0m2.357s 00:06:41.313 user 0m2.693s 00:06:41.313 sys 0m0.626s 00:06:41.313 19:22:07 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:06:41.313 19:22:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.313 ************************************ 00:06:41.313 END TEST locking_app_on_locked_coremask 00:06:41.313 ************************************ 00:06:41.313 19:22:07 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:41.313 19:22:07 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:41.313 19:22:07 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:41.313 19:22:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.313 ************************************ 00:06:41.313 START TEST locking_overlapped_coremask 00:06:41.313 ************************************ 00:06:41.313 19:22:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:06:41.313 19:22:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3378579 00:06:41.313 19:22:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3378579 /var/tmp/spdk.sock 00:06:41.313 19:22:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 3378579 ']' 00:06:41.313 19:22:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:41.313 19:22:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.313 19:22:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:41.313 19:22:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.313 19:22:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:41.313 19:22:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.313 [2024-05-15 19:22:07.401657] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
00:06:41.313 [2024-05-15 19:22:07.401708] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3378579 ] 00:06:41.313 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.313 [2024-05-15 19:22:07.487092] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:41.573 [2024-05-15 19:22:07.557005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.573 [2024-05-15 19:22:07.557137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:41.573 [2024-05-15 19:22:07.557141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.143 19:22:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:42.143 19:22:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:42.143 19:22:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3378839 00:06:42.143 19:22:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3378839 /var/tmp/spdk2.sock 00:06:42.143 19:22:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:42.143 19:22:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:42.143 19:22:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3378839 /var/tmp/spdk2.sock 00:06:42.143 19:22:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:42.143 19:22:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:42.143 19:22:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:42.143 19:22:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:42.143 19:22:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3378839 /var/tmp/spdk2.sock 00:06:42.143 19:22:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 3378839 ']' 00:06:42.143 19:22:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:42.143 19:22:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:42.143 19:22:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:42.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:42.143 19:22:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:42.143 19:22:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.143 [2024-05-15 19:22:08.312023] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
00:06:42.143 [2024-05-15 19:22:08.312078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3378839 ] 00:06:42.403 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.403 [2024-05-15 19:22:08.392168] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3378579 has claimed it. 00:06:42.403 [2024-05-15 19:22:08.392197] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:42.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3378839) - No such process 00:06:42.974 ERROR: process (pid: 3378839) is no longer running 00:06:42.974 19:22:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:42.974 19:22:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:42.974 19:22:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:42.974 19:22:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:42.974 19:22:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:42.974 19:22:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:42.974 19:22:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:42.974 19:22:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:42.974 19:22:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:42.974 19:22:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:42.974 19:22:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3378579 00:06:42.974 19:22:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 3378579 ']' 00:06:42.974 19:22:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 3378579 00:06:42.974 19:22:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:06:42.974 19:22:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:42.974 19:22:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3378579 00:06:42.974 19:22:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:42.974 19:22:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:42.974 19:22:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3378579' 00:06:42.974 killing process with pid 3378579 00:06:42.974 19:22:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 
3378579 00:06:42.974 19:22:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 3378579 00:06:43.236 00:06:43.236 real 0m1.878s 00:06:43.236 user 0m5.374s 00:06:43.236 sys 0m0.410s 00:06:43.236 19:22:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:43.236 19:22:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.236 ************************************ 00:06:43.236 END TEST locking_overlapped_coremask 00:06:43.236 ************************************ 00:06:43.236 19:22:09 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:43.236 19:22:09 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:43.236 19:22:09 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:43.236 19:22:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.236 ************************************ 00:06:43.236 START TEST locking_overlapped_coremask_via_rpc 00:06:43.236 ************************************ 00:06:43.236 19:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:06:43.236 19:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3379077 00:06:43.236 19:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3379077 /var/tmp/spdk.sock 00:06:43.236 19:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3379077 ']' 00:06:43.236 19:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:43.236 19:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.236 19:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:43.236 19:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.236 19:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:43.236 19:22:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.236 [2024-05-15 19:22:09.366774] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:06:43.236 [2024-05-15 19:22:09.366849] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3379077 ] 00:06:43.236 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.504 [2024-05-15 19:22:09.451941] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:43.504 [2024-05-15 19:22:09.451971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:43.504 [2024-05-15 19:22:09.525041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.504 [2024-05-15 19:22:09.525176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.504 [2024-05-15 19:22:09.525180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.102 19:22:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:44.102 19:22:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:44.102 19:22:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3379217 00:06:44.102 19:22:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3379217 /var/tmp/spdk2.sock 00:06:44.103 19:22:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3379217 ']' 00:06:44.103 19:22:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:44.103 19:22:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:44.103 19:22:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:44.103 19:22:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:44.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:44.103 19:22:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:44.103 19:22:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.363 [2024-05-15 19:22:10.288543] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:06:44.363 [2024-05-15 19:22:10.288596] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3379217 ] 00:06:44.363 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.363 [2024-05-15 19:22:10.368060] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:44.363 [2024-05-15 19:22:10.368081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:44.363 [2024-05-15 19:22:10.473644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:44.363 [2024-05-15 19:22:10.477353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.363 [2024-05-15 19:22:10.477355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:45.304 19:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:45.304 19:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:45.304 19:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:45.304 19:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.304 19:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.304 19:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.304 19:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:45.304 19:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:45.304 19:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:45.304 19:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:45.304 19:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.304 19:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:45.304 19:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.304 19:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:45.304 19:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.304 19:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.304 [2024-05-15 19:22:11.165372] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3379077 has claimed it. 
00:06:45.304 request: 00:06:45.304 { 00:06:45.304 "method": "framework_enable_cpumask_locks", 00:06:45.304 "req_id": 1 00:06:45.304 } 00:06:45.304 Got JSON-RPC error response 00:06:45.304 response: 00:06:45.304 { 00:06:45.304 "code": -32603, 00:06:45.304 "message": "Failed to claim CPU core: 2" 00:06:45.304 } 00:06:45.304 19:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:45.304 19:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:45.304 19:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:45.304 19:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:45.304 19:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:45.304 19:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3379077 /var/tmp/spdk.sock 00:06:45.304 19:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3379077 ']' 00:06:45.304 19:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.304 19:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:45.304 19:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.304 19:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:45.304 19:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.304 19:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:45.304 19:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:45.304 19:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3379217 /var/tmp/spdk2.sock 00:06:45.304 19:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3379217 ']' 00:06:45.305 19:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:45.305 19:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:45.305 19:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:45.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
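The failure above is a plain JSON-RPC exchange over the second target's Unix socket: rpc_cmd sends the request shown and gets back the -32603 error because pid 3379077 still owns the lock on core 2. A sketch of reproducing the same call by hand (the scripts/rpc.py path relative to the SPDK checkout is an assumption; the socket path and method name are taken from the log):

  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # while the first target (pid 3379077) holds core 2, this is expected to
  # fail with code -32603: "Failed to claim CPU core: 2"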
00:06:45.305 19:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:45.305 19:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.565 19:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:45.565 19:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:45.565 19:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:45.565 19:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:45.565 19:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:45.565 19:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:45.565 00:06:45.565 real 0m2.301s 00:06:45.565 user 0m1.029s 00:06:45.565 sys 0m0.194s 00:06:45.565 19:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:45.565 19:22:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.565 ************************************ 00:06:45.566 END TEST locking_overlapped_coremask_via_rpc 00:06:45.566 ************************************ 00:06:45.566 19:22:11 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:45.566 19:22:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3379077 ]] 00:06:45.566 19:22:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3379077 00:06:45.566 19:22:11 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3379077 ']' 00:06:45.566 19:22:11 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3379077 00:06:45.566 19:22:11 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:45.566 19:22:11 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:45.566 19:22:11 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3379077 00:06:45.566 19:22:11 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:45.566 19:22:11 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:45.566 19:22:11 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3379077' 00:06:45.566 killing process with pid 3379077 00:06:45.566 19:22:11 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 3379077 00:06:45.566 19:22:11 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 3379077 00:06:45.826 19:22:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3379217 ]] 00:06:45.826 19:22:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3379217 00:06:45.826 19:22:11 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3379217 ']' 00:06:45.826 19:22:11 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3379217 00:06:45.826 19:22:11 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:45.826 19:22:11 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' 
Linux = Linux ']' 00:06:45.826 19:22:11 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3379217 00:06:45.826 19:22:11 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:45.826 19:22:11 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:45.826 19:22:11 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3379217' 00:06:45.826 killing process with pid 3379217 00:06:45.826 19:22:11 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 3379217 00:06:45.826 19:22:11 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 3379217 00:06:46.086 19:22:12 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:46.086 19:22:12 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:46.086 19:22:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3379077 ]] 00:06:46.086 19:22:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3379077 00:06:46.086 19:22:12 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3379077 ']' 00:06:46.086 19:22:12 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3379077 00:06:46.086 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3379077) - No such process 00:06:46.086 19:22:12 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 3379077 is not found' 00:06:46.086 Process with pid 3379077 is not found 00:06:46.086 19:22:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3379217 ]] 00:06:46.086 19:22:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3379217 00:06:46.086 19:22:12 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3379217 ']' 00:06:46.086 19:22:12 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3379217 00:06:46.086 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3379217) - No such process 00:06:46.086 19:22:12 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 3379217 is not found' 00:06:46.086 Process with pid 3379217 is not found 00:06:46.086 19:22:12 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:46.086 00:06:46.086 real 0m16.765s 00:06:46.086 user 0m30.002s 00:06:46.086 sys 0m4.868s 00:06:46.086 19:22:12 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:46.086 19:22:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:46.086 ************************************ 00:06:46.086 END TEST cpu_locks 00:06:46.086 ************************************ 00:06:46.086 00:06:46.086 real 0m43.089s 00:06:46.086 user 1m25.759s 00:06:46.086 sys 0m8.298s 00:06:46.087 19:22:12 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:46.087 19:22:12 event -- common/autotest_common.sh@10 -- # set +x 00:06:46.087 ************************************ 00:06:46.087 END TEST event 00:06:46.087 ************************************ 00:06:46.087 19:22:12 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:46.087 19:22:12 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:46.087 19:22:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:46.087 19:22:12 -- common/autotest_common.sh@10 -- # set +x 00:06:46.347 ************************************ 00:06:46.347 START TEST thread 00:06:46.347 ************************************ 00:06:46.347 19:22:12 thread -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:46.347 * Looking for test storage... 00:06:46.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:46.347 19:22:12 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:46.347 19:22:12 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:46.347 19:22:12 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:46.347 19:22:12 thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.347 ************************************ 00:06:46.347 START TEST thread_poller_perf 00:06:46.347 ************************************ 00:06:46.347 19:22:12 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:46.347 [2024-05-15 19:22:12.452990] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:06:46.347 [2024-05-15 19:22:12.453101] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3379717 ] 00:06:46.347 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.607 [2024-05-15 19:22:12.546622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.607 [2024-05-15 19:22:12.619893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.607 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:47.547 ====================================== 00:06:47.547 busy:2407463392 (cyc) 00:06:47.547 total_run_count: 286000 00:06:47.547 tsc_hz: 2400000000 (cyc) 00:06:47.547 ====================================== 00:06:47.547 poller_cost: 8417 (cyc), 3507 (nsec) 00:06:47.547 00:06:47.547 real 0m1.250s 00:06:47.547 user 0m1.145s 00:06:47.547 sys 0m0.100s 00:06:47.547 19:22:13 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:47.547 19:22:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:47.547 ************************************ 00:06:47.547 END TEST thread_poller_perf 00:06:47.547 ************************************ 00:06:47.547 19:22:13 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:47.547 19:22:13 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:47.547 19:22:13 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:47.547 19:22:13 thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.808 ************************************ 00:06:47.808 START TEST thread_poller_perf 00:06:47.808 ************************************ 00:06:47.809 19:22:13 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:47.809 [2024-05-15 19:22:13.785270] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
00:06:47.809 [2024-05-15 19:22:13.785384] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3380008 ] 00:06:47.809 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.809 [2024-05-15 19:22:13.880253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.809 [2024-05-15 19:22:13.944987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.809 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:49.193 ====================================== 00:06:49.193 busy:2401883924 (cyc) 00:06:49.193 total_run_count: 3812000 00:06:49.193 tsc_hz: 2400000000 (cyc) 00:06:49.193 ====================================== 00:06:49.193 poller_cost: 630 (cyc), 262 (nsec) 00:06:49.193 00:06:49.193 real 0m1.235s 00:06:49.193 user 0m1.139s 00:06:49.193 sys 0m0.092s 00:06:49.193 19:22:14 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:49.193 19:22:14 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:49.193 ************************************ 00:06:49.193 END TEST thread_poller_perf 00:06:49.193 ************************************ 00:06:49.193 19:22:15 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:49.193 00:06:49.193 real 0m2.754s 00:06:49.193 user 0m2.381s 00:06:49.193 sys 0m0.374s 00:06:49.193 19:22:15 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:49.193 19:22:15 thread -- common/autotest_common.sh@10 -- # set +x 00:06:49.193 ************************************ 00:06:49.193 END TEST thread 00:06:49.193 ************************************ 00:06:49.193 19:22:15 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:49.193 19:22:15 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:49.193 19:22:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:49.193 19:22:15 -- common/autotest_common.sh@10 -- # set +x 00:06:49.193 ************************************ 00:06:49.193 START TEST accel 00:06:49.193 ************************************ 00:06:49.193 19:22:15 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:49.193 * Looking for test storage... 00:06:49.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:49.193 19:22:15 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:49.193 19:22:15 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:49.193 19:22:15 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:49.193 19:22:15 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3380401 00:06:49.193 19:22:15 accel -- accel/accel.sh@63 -- # waitforlisten 3380401 00:06:49.193 19:22:15 accel -- common/autotest_common.sh@827 -- # '[' -z 3380401 ']' 00:06:49.193 19:22:15 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.193 19:22:15 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:49.193 19:22:15 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
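The poller_cost lines in the two runs above are just busy cycles divided by completed polls, converted to nanoseconds through the reported TSC rate (2400000000 cyc/s). A quick check of the second run's figures, as a sketch rather than part of the harness:

  busy=2401883924 runs=3812000 tsc_hz=2400000000
  echo "$(( busy / runs )) cyc, $(( busy * 1000000000 / tsc_hz / runs )) nsec per poll"
  # prints: 630 cyc, 262 nsec per poll
  # (the first run works out the same way: 8417 cyc, 3507 nsec)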
00:06:49.193 19:22:15 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:49.193 19:22:15 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:49.193 19:22:15 accel -- common/autotest_common.sh@10 -- # set +x 00:06:49.193 19:22:15 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:49.193 19:22:15 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:49.193 19:22:15 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:49.193 19:22:15 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.193 19:22:15 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.193 19:22:15 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:49.193 19:22:15 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:49.193 19:22:15 accel -- accel/accel.sh@41 -- # jq -r . 00:06:49.193 [2024-05-15 19:22:15.284423] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:06:49.193 [2024-05-15 19:22:15.284489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3380401 ] 00:06:49.193 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.193 [2024-05-15 19:22:15.371905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.454 [2024-05-15 19:22:15.443413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.025 19:22:16 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:50.025 19:22:16 accel -- common/autotest_common.sh@860 -- # return 0 00:06:50.025 19:22:16 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:50.025 19:22:16 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:50.025 19:22:16 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:50.025 19:22:16 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:50.025 19:22:16 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:50.025 19:22:16 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:50.025 19:22:16 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:50.025 19:22:16 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.025 19:22:16 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.025 19:22:16 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.025 19:22:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.025 19:22:16 accel -- accel/accel.sh@72 -- # IFS== 00:06:50.025 19:22:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:50.025 19:22:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:50.025 19:22:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.025 19:22:16 accel -- accel/accel.sh@72 -- # IFS== 00:06:50.025 19:22:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:50.025 19:22:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:50.025 19:22:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.025 19:22:16 accel -- accel/accel.sh@72 -- # IFS== 00:06:50.025 19:22:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:50.025 19:22:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:50.025 19:22:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.025 19:22:16 accel -- accel/accel.sh@72 -- # IFS== 00:06:50.025 19:22:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:50.025 19:22:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:50.025 19:22:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.025 19:22:16 accel -- accel/accel.sh@72 -- # IFS== 00:06:50.025 19:22:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:50.025 19:22:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:50.025 19:22:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.025 19:22:16 accel -- accel/accel.sh@72 -- # IFS== 00:06:50.025 19:22:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:50.025 19:22:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:50.025 19:22:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.025 19:22:16 accel -- accel/accel.sh@72 -- # IFS== 00:06:50.025 19:22:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:50.025 19:22:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:50.025 19:22:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.025 19:22:16 accel -- accel/accel.sh@72 -- # IFS== 00:06:50.025 19:22:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:50.025 19:22:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:50.025 19:22:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.025 19:22:16 accel -- accel/accel.sh@72 -- # IFS== 00:06:50.025 19:22:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:50.025 19:22:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:50.025 19:22:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.025 19:22:16 accel -- accel/accel.sh@72 -- # IFS== 00:06:50.025 19:22:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:50.025 19:22:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:50.025 19:22:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.025 19:22:16 accel -- accel/accel.sh@72 -- # IFS== 00:06:50.025 19:22:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:50.025 
19:22:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:50.025 19:22:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.025 19:22:16 accel -- accel/accel.sh@72 -- # IFS== 00:06:50.025 19:22:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:50.025 19:22:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:50.025 19:22:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.025 19:22:16 accel -- accel/accel.sh@72 -- # IFS== 00:06:50.025 19:22:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:50.025 19:22:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:50.025 19:22:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.025 19:22:16 accel -- accel/accel.sh@72 -- # IFS== 00:06:50.025 19:22:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:50.025 19:22:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:50.025 19:22:16 accel -- accel/accel.sh@75 -- # killprocess 3380401 00:06:50.025 19:22:16 accel -- common/autotest_common.sh@946 -- # '[' -z 3380401 ']' 00:06:50.025 19:22:16 accel -- common/autotest_common.sh@950 -- # kill -0 3380401 00:06:50.025 19:22:16 accel -- common/autotest_common.sh@951 -- # uname 00:06:50.025 19:22:16 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:50.025 19:22:16 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3380401 00:06:50.285 19:22:16 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:50.285 19:22:16 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:50.285 19:22:16 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3380401' 00:06:50.285 killing process with pid 3380401 00:06:50.285 19:22:16 accel -- common/autotest_common.sh@965 -- # kill 3380401 00:06:50.285 19:22:16 accel -- common/autotest_common.sh@970 -- # wait 3380401 00:06:50.285 19:22:16 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:50.285 19:22:16 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:50.285 19:22:16 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:50.285 19:22:16 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:50.285 19:22:16 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.546 19:22:16 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:06:50.546 19:22:16 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:50.546 19:22:16 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:50.546 19:22:16 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.546 19:22:16 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.546 19:22:16 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.546 19:22:16 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.546 19:22:16 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.546 19:22:16 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:50.546 19:22:16 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
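The exp_opcs loop above records one module per opcode from the accel_get_opc_assignments RPC; the jq filter simply flattens the returned JSON object into key=value lines. A standalone sketch of that filter (the JSON input here is illustrative, not the actual RPC reply):

  echo '{"copy":"software","crc32c":"software"}' \
    | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
  # copy=software
  # crc32c=software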
00:06:50.546 19:22:16 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:50.546 19:22:16 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:50.546 19:22:16 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:50.546 19:22:16 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:50.546 19:22:16 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:50.546 19:22:16 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.546 ************************************ 00:06:50.546 START TEST accel_missing_filename 00:06:50.546 ************************************ 00:06:50.546 19:22:16 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:06:50.546 19:22:16 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:50.546 19:22:16 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:50.546 19:22:16 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:50.546 19:22:16 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.546 19:22:16 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:50.546 19:22:16 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.546 19:22:16 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:50.546 19:22:16 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:50.546 19:22:16 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:50.546 19:22:16 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.546 19:22:16 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.546 19:22:16 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.546 19:22:16 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.546 19:22:16 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.546 19:22:16 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:50.546 19:22:16 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:50.546 [2024-05-15 19:22:16.635196] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:06:50.546 [2024-05-15 19:22:16.635272] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3380769 ] 00:06:50.546 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.546 [2024-05-15 19:22:16.722854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.806 [2024-05-15 19:22:16.801663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.806 [2024-05-15 19:22:16.834558] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:50.806 [2024-05-15 19:22:16.872106] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:06:50.806 A filename is required. 
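The 'A filename is required.' error above is the intended negative result: compress and decompress workloads take their input from the file named with -l, which accel_missing_filename deliberately omits. For contrast, a sketch of what a passing invocation would look like, using the same binary and the bib input file the next test points at (whether it actually completes depends on a compress-capable accel module being loaded):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
  # note: adding -y here is rejected ('Compression does not support the verify
  # option'), which is exactly what accel_compress_verify checks next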
00:06:50.806 19:22:16 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:50.806 19:22:16 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:50.806 19:22:16 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:50.806 19:22:16 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:50.806 19:22:16 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:50.806 19:22:16 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:50.806 00:06:50.806 real 0m0.323s 00:06:50.806 user 0m0.236s 00:06:50.806 sys 0m0.127s 00:06:50.806 19:22:16 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:50.806 19:22:16 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:50.806 ************************************ 00:06:50.806 END TEST accel_missing_filename 00:06:50.806 ************************************ 00:06:50.806 19:22:16 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:50.806 19:22:16 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:50.806 19:22:16 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:50.806 19:22:16 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.066 ************************************ 00:06:51.066 START TEST accel_compress_verify 00:06:51.066 ************************************ 00:06:51.066 19:22:17 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:51.066 19:22:17 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:51.066 19:22:17 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:51.066 19:22:17 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:51.066 19:22:17 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.066 19:22:17 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:51.066 19:22:17 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.066 19:22:17 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:51.066 19:22:17 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:51.066 19:22:17 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:51.066 19:22:17 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.066 19:22:17 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.066 19:22:17 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.066 19:22:17 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.066 19:22:17 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.066 
19:22:17 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:51.066 19:22:17 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:51.066 [2024-05-15 19:22:17.036123] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:06:51.066 [2024-05-15 19:22:17.036186] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3380794 ] 00:06:51.066 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.066 [2024-05-15 19:22:17.122347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.066 [2024-05-15 19:22:17.198372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.066 [2024-05-15 19:22:17.230914] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:51.327 [2024-05-15 19:22:17.268364] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:06:51.327 00:06:51.327 Compression does not support the verify option, aborting. 00:06:51.327 19:22:17 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:51.327 19:22:17 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:51.327 19:22:17 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:51.327 19:22:17 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:51.327 19:22:17 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:51.327 19:22:17 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:51.327 00:06:51.327 real 0m0.317s 00:06:51.327 user 0m0.228s 00:06:51.327 sys 0m0.130s 00:06:51.327 19:22:17 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:51.327 19:22:17 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:51.327 ************************************ 00:06:51.327 END TEST accel_compress_verify 00:06:51.327 ************************************ 00:06:51.327 19:22:17 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:51.327 19:22:17 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:51.327 19:22:17 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:51.327 19:22:17 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.327 ************************************ 00:06:51.327 START TEST accel_wrong_workload 00:06:51.327 ************************************ 00:06:51.327 19:22:17 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:06:51.327 19:22:17 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:51.327 19:22:17 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:51.327 19:22:17 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:51.327 19:22:17 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.327 19:22:17 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:51.327 19:22:17 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.327 19:22:17 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:06:51.327 19:22:17 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:51.327 19:22:17 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:51.327 19:22:17 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.327 19:22:17 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.327 19:22:17 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.327 19:22:17 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.327 19:22:17 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.327 19:22:17 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:51.327 19:22:17 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:51.327 Unsupported workload type: foobar 00:06:51.327 [2024-05-15 19:22:17.434646] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:51.327 accel_perf options: 00:06:51.327 [-h help message] 00:06:51.327 [-q queue depth per core] 00:06:51.328 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:51.328 [-T number of threads per core 00:06:51.328 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:51.328 [-t time in seconds] 00:06:51.328 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:51.328 [ dif_verify, , dif_generate, dif_generate_copy 00:06:51.328 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:51.328 [-l for compress/decompress workloads, name of uncompressed input file 00:06:51.328 [-S for crc32c workload, use this seed value (default 0) 00:06:51.328 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:51.328 [-f for fill workload, use this BYTE value (default 255) 00:06:51.328 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:51.328 [-y verify result if this switch is on] 00:06:51.328 [-a tasks to allocate per core (default: same value as -q)] 00:06:51.328 Can be used to spread operations across a wider range of memory. 
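The option listing above covers every flag the accel_test wrapper passes through in the remaining cases; for instance the crc32c test that follows reduces to the sketch below (flags taken from the listing and from that test's own command line; the -c /dev/fd/62 config argument is left out):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w crc32c -S 32 -y
  # -t 1   run for one second
  # -w     workload type, crc32c here
  # -S 32  seed value for the crc32c workload
  # -y     verify the results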
00:06:51.328 19:22:17 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:51.328 19:22:17 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:51.328 19:22:17 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:51.328 19:22:17 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:51.328 00:06:51.328 real 0m0.036s 00:06:51.328 user 0m0.018s 00:06:51.328 sys 0m0.017s 00:06:51.328 19:22:17 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:51.328 19:22:17 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:51.328 ************************************ 00:06:51.328 END TEST accel_wrong_workload 00:06:51.328 ************************************ 00:06:51.328 Error: writing output failed: Broken pipe 00:06:51.328 19:22:17 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:51.328 19:22:17 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:51.328 19:22:17 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:51.328 19:22:17 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.589 ************************************ 00:06:51.589 START TEST accel_negative_buffers 00:06:51.589 ************************************ 00:06:51.589 19:22:17 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:51.589 19:22:17 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:51.589 19:22:17 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:51.589 19:22:17 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:51.589 19:22:17 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.589 19:22:17 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:51.589 19:22:17 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.589 19:22:17 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:51.589 19:22:17 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:51.589 19:22:17 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:51.589 19:22:17 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.589 19:22:17 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.589 19:22:17 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.589 19:22:17 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.589 19:22:17 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.589 19:22:17 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:51.589 19:22:17 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:51.589 -x option must be non-negative. 
00:06:51.589 [2024-05-15 19:22:17.552304] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:51.589 accel_perf options: 00:06:51.589 [-h help message] 00:06:51.589 [-q queue depth per core] 00:06:51.589 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:51.589 [-T number of threads per core 00:06:51.589 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:51.589 [-t time in seconds] 00:06:51.589 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:51.589 [ dif_verify, , dif_generate, dif_generate_copy 00:06:51.589 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:51.589 [-l for compress/decompress workloads, name of uncompressed input file 00:06:51.589 [-S for crc32c workload, use this seed value (default 0) 00:06:51.589 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:51.589 [-f for fill workload, use this BYTE value (default 255) 00:06:51.589 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:51.589 [-y verify result if this switch is on] 00:06:51.589 [-a tasks to allocate per core (default: same value as -q)] 00:06:51.589 Can be used to spread operations across a wider range of memory. 00:06:51.589 19:22:17 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:51.589 19:22:17 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:51.589 19:22:17 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:51.589 19:22:17 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:51.589 00:06:51.589 real 0m0.036s 00:06:51.589 user 0m0.018s 00:06:51.589 sys 0m0.018s 00:06:51.589 19:22:17 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:51.589 19:22:17 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:51.589 ************************************ 00:06:51.589 END TEST accel_negative_buffers 00:06:51.589 ************************************ 00:06:51.589 Error: writing output failed: Broken pipe 00:06:51.589 19:22:17 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:51.589 19:22:17 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:51.589 19:22:17 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:51.589 19:22:17 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.589 ************************************ 00:06:51.589 START TEST accel_crc32c 00:06:51.589 ************************************ 00:06:51.589 19:22:17 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:51.589 19:22:17 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:51.589 19:22:17 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:51.589 19:22:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.589 19:22:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.589 19:22:17 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:51.589 19:22:17 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 
-y 00:06:51.589 19:22:17 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:51.589 19:22:17 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.589 19:22:17 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.589 19:22:17 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.589 19:22:17 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.589 19:22:17 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.589 19:22:17 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:51.589 19:22:17 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:51.589 [2024-05-15 19:22:17.670416] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:06:51.589 [2024-05-15 19:22:17.670495] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3380984 ] 00:06:51.589 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.589 [2024-05-15 19:22:17.759288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.851 [2024-05-15 19:22:17.837513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.851 19:22:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:51.851 19:22:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.851 19:22:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.851 19:22:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.851 19:22:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:51.851 19:22:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.851 19:22:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.851 19:22:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.851 19:22:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:51.851 19:22:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.851 19:22:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.851 19:22:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.851 19:22:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:51.851 19:22:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.851 19:22:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.851 19:22:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.851 19:22:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:51.851 19:22:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.851 19:22:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.851 19:22:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.851 19:22:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:51.851 19:22:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.851 19:22:17 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:51.852 19:22:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.852 19:22:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.852 19:22:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:51.852 19:22:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.852 19:22:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.852 19:22:17 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:06:51.852 19:22:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:51.852 19:22:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.852 19:22:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.852 19:22:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.852 19:22:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:51.852 19:22:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.852 19:22:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.852 19:22:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.852 19:22:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:51.852 19:22:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.852 19:22:17 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:51.852 19:22:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.852 19:22:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.852 19:22:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:51.852 19:22:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.852 19:22:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.852 19:22:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.852 19:22:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:51.852 19:22:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.852 19:22:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.852 19:22:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.852 19:22:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:51.852 19:22:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.852 19:22:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.852 19:22:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.852 19:22:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:51.852 19:22:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.852 19:22:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.852 19:22:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.852 19:22:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:51.852 19:22:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.852 19:22:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.852 19:22:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.852 19:22:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:51.852 19:22:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.852 19:22:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.852 19:22:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.852 19:22:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:51.852 19:22:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.852 19:22:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.852 19:22:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.792 19:22:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.792 19:22:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.792 19:22:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.792 19:22:18 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:06:52.792 19:22:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.792 19:22:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.792 19:22:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.792 19:22:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.792 19:22:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.792 19:22:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.792 19:22:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.792 19:22:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.792 19:22:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.792 19:22:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.792 19:22:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.792 19:22:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.792 19:22:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.792 19:22:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.792 19:22:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.792 19:22:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.792 19:22:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.792 19:22:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.792 19:22:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.792 19:22:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.792 19:22:18 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:52.792 19:22:18 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:52.792 19:22:18 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.792 00:06:52.792 real 0m1.324s 00:06:52.792 user 0m1.199s 00:06:52.792 sys 0m0.136s 00:06:52.792 19:22:18 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:52.792 19:22:18 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:52.792 ************************************ 00:06:52.792 END TEST accel_crc32c 00:06:52.792 ************************************ 00:06:53.052 19:22:19 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:53.052 19:22:19 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:53.052 19:22:19 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:53.052 19:22:19 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.052 ************************************ 00:06:53.052 START TEST accel_crc32c_C2 00:06:53.052 ************************************ 00:06:53.052 19:22:19 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:53.052 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:53.052 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:53.052 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.052 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.052 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:53.052 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:53.052 19:22:19 accel.accel_crc32c_C2 -- 
accel/accel.sh@12 -- # build_accel_config 00:06:53.052 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.052 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.052 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.052 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.052 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.052 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:53.052 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:53.052 [2024-05-15 19:22:19.078781] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:06:53.052 [2024-05-15 19:22:19.078870] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3381215 ] 00:06:53.052 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.052 [2024-05-15 19:22:19.166429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.312 [2024-05-15 19:22:19.238889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.312 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:53.313 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.313 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.313 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.313 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:53.313 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.313 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.313 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.313 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:53.313 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.313 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.313 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.313 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:53.313 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.313 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.313 19:22:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.252 19:22:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:54.252 19:22:20 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.252 19:22:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.252 19:22:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.252 19:22:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:54.252 19:22:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.252 19:22:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.252 19:22:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.252 19:22:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:54.252 19:22:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.252 19:22:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.252 19:22:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.252 19:22:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:54.252 19:22:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.252 19:22:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.252 19:22:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.252 19:22:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:54.252 19:22:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.252 19:22:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.252 19:22:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.252 19:22:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:54.252 19:22:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.252 19:22:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.252 19:22:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.252 19:22:20 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:54.252 19:22:20 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:54.252 19:22:20 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.252 00:06:54.252 real 0m1.318s 00:06:54.252 user 0m1.201s 00:06:54.252 sys 0m0.127s 00:06:54.252 19:22:20 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:54.252 19:22:20 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:54.252 ************************************ 00:06:54.252 END TEST accel_crc32c_C2 00:06:54.252 ************************************ 00:06:54.252 19:22:20 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:54.252 19:22:20 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:54.252 19:22:20 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:54.252 19:22:20 accel -- common/autotest_common.sh@10 -- # set +x 00:06:54.512 ************************************ 00:06:54.512 START TEST accel_copy 00:06:54.513 ************************************ 00:06:54.513 19:22:20 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:54.513 19:22:20 
accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:54.513 [2024-05-15 19:22:20.481263] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:06:54.513 [2024-05-15 19:22:20.481359] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3381565 ] 00:06:54.513 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.513 [2024-05-15 19:22:20.579086] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.513 [2024-05-15 19:22:20.646935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.513 19:22:20 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.513 19:22:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.896 19:22:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:55.897 19:22:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.897 19:22:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.897 19:22:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.897 19:22:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:55.897 19:22:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.897 19:22:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.897 19:22:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
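The bulk of the output above is bash xtrace from accel/accel.sh: the harness runs each workload under set -x, so every IFS=:, read -r var val, case "$var" in and val= assignment in its option-parsing loop is echoed as its own trace entry. A minimal sketch of the shape such a loop takes is below; the key names and the here-doc input are illustrative placeholders, not the actual accel.sh source.

  #!/usr/bin/env bash
  # Sketch only: approximates the "IFS=: / read -r var val / case" loop whose
  # xtrace dominates the log above. Keys and sample input are hypothetical.
  accel_opc=""
  accel_module=""
  while IFS=: read -r var val; do
      val=${val# }                       # drop the space that follows the colon
      case "$var" in
          opc)    accel_opc=$val ;;      # e.g. crc32c, copy, fill
          module) accel_module=$val ;;   # e.g. software
      esac
  done <<'EOF'
  opc: crc32c
  module: software
  EOF
  echo "parsed workload '$accel_opc' on the '$accel_module' module"

Under set -x each iteration of such a loop emits several trace lines, which is why a one-second accel_perf run produces the long wall of "-- # val=" entries seen here.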
00:06:55.897 19:22:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:55.897 19:22:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.897 19:22:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.897 19:22:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.897 19:22:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:55.897 19:22:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.897 19:22:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.897 19:22:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.897 19:22:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:55.897 19:22:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.897 19:22:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.897 19:22:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.897 19:22:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:55.897 19:22:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.897 19:22:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.897 19:22:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.897 19:22:21 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:55.897 19:22:21 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:55.897 19:22:21 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.897 00:06:55.897 real 0m1.323s 00:06:55.897 user 0m1.197s 00:06:55.897 sys 0m0.135s 00:06:55.897 19:22:21 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:55.897 19:22:21 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:55.897 ************************************ 00:06:55.897 END TEST accel_copy 00:06:55.897 ************************************ 00:06:55.897 19:22:21 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:55.897 19:22:21 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:55.897 19:22:21 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:55.897 19:22:21 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.897 ************************************ 00:06:55.897 START TEST accel_fill 00:06:55.897 ************************************ 00:06:55.897 19:22:21 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:55.897 19:22:21 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:55.897 19:22:21 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:55.897 19:22:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.897 19:22:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.897 19:22:21 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:55.897 19:22:21 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:55.897 19:22:21 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:55.897 19:22:21 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.897 19:22:21 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.897 19:22:21 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.897 19:22:21 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.897 19:22:21 accel.accel_fill -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.897 19:22:21 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:55.897 19:22:21 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:55.897 [2024-05-15 19:22:21.882680] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:06:55.897 [2024-05-15 19:22:21.882751] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3381914 ] 00:06:55.897 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.897 [2024-05-15 19:22:21.968738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.897 [2024-05-15 19:22:22.042724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.897 19:22:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:55.897 19:22:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.897 19:22:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.897 19:22:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.897 19:22:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:55.897 19:22:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.897 19:22:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.897 19:22:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.897 19:22:22 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:55.897 19:22:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.897 19:22:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.897 19:22:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.897 19:22:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:55.897 19:22:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.897 19:22:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.897 19:22:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.897 19:22:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:55.897 19:22:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.897 19:22:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.897 19:22:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.897 19:22:22 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:55.897 19:22:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.897 19:22:22 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:55.897 19:22:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.897 19:22:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.897 19:22:22 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:55.897 19:22:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.897 19:22:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:56.157 19:22:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:56.157 19:22:22 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:56.157 19:22:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:56.157 19:22:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:56.157 19:22:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:56.157 19:22:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:56.157 19:22:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:56.157 19:22:22 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:56.157 19:22:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:56.157 19:22:22 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:56.157 19:22:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:56.157 19:22:22 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:56.157 19:22:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:56.157 19:22:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:56.157 19:22:22 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:56.157 19:22:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:56.157 19:22:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:56.157 19:22:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:56.157 19:22:22 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:56.157 19:22:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:56.157 19:22:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:56.157 19:22:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:56.157 19:22:22 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:56.157 19:22:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:56.157 19:22:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:56.157 19:22:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:56.157 19:22:22 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:56.157 19:22:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:56.158 19:22:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:56.158 19:22:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:56.158 19:22:22 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:56.158 19:22:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:56.158 19:22:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:56.158 19:22:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:56.158 19:22:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:56.158 19:22:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:56.158 19:22:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:56.158 19:22:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:56.158 19:22:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:56.158 19:22:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:56.158 19:22:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:56.158 19:22:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.097 19:22:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:57.097 19:22:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:57.097 19:22:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.097 19:22:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.097 19:22:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:57.097 19:22:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:57.097 19:22:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.097 19:22:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.097 19:22:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:57.097 19:22:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:57.097 19:22:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.097 19:22:23 accel.accel_fill -- accel/accel.sh@19 -- # read 
-r var val 00:06:57.097 19:22:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:57.097 19:22:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:57.097 19:22:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.097 19:22:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.097 19:22:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:57.097 19:22:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:57.097 19:22:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.097 19:22:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.097 19:22:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:57.097 19:22:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:57.097 19:22:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.098 19:22:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.098 19:22:23 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:57.098 19:22:23 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:57.098 19:22:23 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.098 00:06:57.098 real 0m1.316s 00:06:57.098 user 0m1.203s 00:06:57.098 sys 0m0.123s 00:06:57.098 19:22:23 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:57.098 19:22:23 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:57.098 ************************************ 00:06:57.098 END TEST accel_fill 00:06:57.098 ************************************ 00:06:57.098 19:22:23 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:57.098 19:22:23 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:57.098 19:22:23 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:57.098 19:22:23 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.098 ************************************ 00:06:57.098 START TEST accel_copy_crc32c 00:06:57.098 ************************************ 00:06:57.098 19:22:23 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:06:57.098 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:57.098 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:57.098 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.098 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.098 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:57.098 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:57.098 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:57.098 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.098 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.098 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.098 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.098 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.098 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:57.098 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
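Every sub-test in this block reduces to a single accel_perf invocation whose full command line is visible in the trace, for example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y just above. A hedged sketch of replaying one workload by hand on a host with a built SPDK tree follows; SPDK_DIR is a placeholder, the flags are copied verbatim from the trace, and dropping -c /dev/fd/62 (the harness's way of feeding a JSON accel config over a file descriptor) is assumed to fall back to the default software module.

  # Sketch: replay one accel workload outside the test harness.
  # SPDK_DIR is a placeholder; -t 1 (run for 1 second), -w copy_crc32c and -y
  # are taken verbatim from the accel_perf command line traced above.
  SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w copy_crc32c -y
  # Omitting the harness's -c /dev/fd/62 JSON config is an assumption; without
  # it the run is expected to use the software implementation, matching the
  # "accel_module=software" assignments seen in the trace.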
00:06:57.098 [2024-05-15 19:22:23.281349] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:06:57.098 [2024-05-15 19:22:23.281412] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3382263 ] 00:06:57.358 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.358 [2024-05-15 19:22:23.368394] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.358 [2024-05-15 19:22:23.446319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:57.358 19:22:23 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.358 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.359 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.359 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.359 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.359 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.359 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.359 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.359 19:22:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.742 19:22:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.742 19:22:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.742 19:22:24 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:06:58.742 19:22:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.742 19:22:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.742 19:22:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.742 19:22:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.742 19:22:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.742 19:22:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.742 19:22:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.742 19:22:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.742 19:22:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.742 19:22:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.742 19:22:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.742 19:22:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.742 19:22:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.742 19:22:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.742 19:22:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.742 19:22:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.742 19:22:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.742 19:22:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.742 19:22:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.742 19:22:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.742 19:22:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.742 19:22:24 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:58.742 19:22:24 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:58.742 19:22:24 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.742 00:06:58.742 real 0m1.324s 00:06:58.742 user 0m1.203s 00:06:58.742 sys 0m0.132s 00:06:58.742 19:22:24 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:58.742 19:22:24 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:58.742 ************************************ 00:06:58.742 END TEST accel_copy_crc32c 00:06:58.742 ************************************ 00:06:58.742 19:22:24 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:58.742 19:22:24 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:58.742 19:22:24 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:58.742 19:22:24 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.742 ************************************ 00:06:58.742 START TEST accel_copy_crc32c_C2 00:06:58.742 ************************************ 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
copy_crc32c -y -C 2 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:58.742 [2024-05-15 19:22:24.686440] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:06:58.742 [2024-05-15 19:22:24.686502] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3382542 ] 00:06:58.742 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.742 [2024-05-15 19:22:24.773630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.742 [2024-05-15 19:22:24.848123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:58.742 19:22:24 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.742 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.743 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.743 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.743 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.743 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.743 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.743 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.743 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.743 19:22:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.126 19:22:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.126 19:22:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.126 19:22:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.126 19:22:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.126 19:22:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.126 19:22:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.126 19:22:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.126 19:22:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.126 19:22:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.126 19:22:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.126 19:22:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.126 19:22:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.126 19:22:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.126 19:22:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.126 19:22:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.126 19:22:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.126 19:22:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.126 19:22:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.126 19:22:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.126 19:22:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.126 19:22:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.126 19:22:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.126 19:22:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.126 19:22:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.126 19:22:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:00.126 19:22:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:00.126 19:22:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.126 00:07:00.126 real 0m1.320s 00:07:00.126 user 0m1.211s 00:07:00.126 sys 0m0.120s 00:07:00.126 19:22:25 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:00.126 19:22:25 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- 
# set +x 00:07:00.126 ************************************ 00:07:00.126 END TEST accel_copy_crc32c_C2 00:07:00.126 ************************************ 00:07:00.126 19:22:26 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:00.126 19:22:26 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:00.126 19:22:26 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:00.126 19:22:26 accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.126 ************************************ 00:07:00.126 START TEST accel_dualcast 00:07:00.126 ************************************ 00:07:00.126 19:22:26 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:00.126 [2024-05-15 19:22:26.087935] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
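Each END TEST banner above is preceded by bash time output for that sub-test, and so far every accel workload (crc32c, crc32c_C2, copy, fill, copy_crc32c, copy_crc32c_C2) completes in roughly 1.3 s of wall-clock time on the software module. A small sketch for pulling those figures out of a saved copy of this console output; build.log is a hypothetical filename and the pattern may need tuning around the surrounding xtrace noise.

  # Sketch: list per-test timings alongside the test names from a saved log.
  # "build.log" is a placeholder for a local copy of this console output.
  grep -nE '(END TEST|real[[:space:]]+[0-9]+m[0-9.]+s)' build.log
  # Each "real 0mX.XXXs" line printed by the harness comes just before the
  # matching "END TEST <name>" banner, so the pairs read off top to bottom.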
00:07:00.126 [2024-05-15 19:22:26.088002] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3382733 ] 00:07:00.126 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.126 [2024-05-15 19:22:26.176414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.126 [2024-05-15 19:22:26.254696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.126 
19:22:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.126 19:22:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:01.512 19:22:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:01.512 19:22:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:01.512 19:22:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:01.512 19:22:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:01.512 19:22:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:01.512 19:22:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:01.512 19:22:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:01.512 19:22:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:01.512 19:22:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:01.512 19:22:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:01.512 19:22:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:01.512 19:22:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:01.512 19:22:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:01.512 19:22:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:01.512 19:22:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:01.512 19:22:27 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:07:01.512 19:22:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:01.512 19:22:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:01.512 19:22:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:01.512 19:22:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:01.512 19:22:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:01.512 19:22:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:01.512 19:22:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:01.512 19:22:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:01.512 19:22:27 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:01.512 19:22:27 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:01.512 19:22:27 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.512 00:07:01.512 real 0m1.324s 00:07:01.512 user 0m1.212s 00:07:01.512 sys 0m0.122s 00:07:01.512 19:22:27 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:01.512 19:22:27 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:01.512 ************************************ 00:07:01.512 END TEST accel_dualcast 00:07:01.512 ************************************ 00:07:01.512 19:22:27 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:01.512 19:22:27 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:01.512 19:22:27 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:01.512 19:22:27 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.512 ************************************ 00:07:01.512 START TEST accel_compare 00:07:01.512 ************************************ 00:07:01.512 19:22:27 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:07:01.512 19:22:27 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:01.512 19:22:27 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:01.512 19:22:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.512 19:22:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.512 19:22:27 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:01.512 19:22:27 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:01.512 19:22:27 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:01.512 19:22:27 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.512 19:22:27 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.512 19:22:27 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.512 19:22:27 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.512 19:22:27 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.512 19:22:27 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:01.512 19:22:27 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:01.512 [2024-05-15 19:22:27.496986] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
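accel_compare, which starts in the trace above, is driven the same way; only the workload name changes. A hedged sketch, with the same assumed flag meanings as in the dualcast note earlier:

  # compare workload; -y again requests result verification (assumed)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w compare -y

The real/user/sys triplet printed before each END TEST banner appears to be the shell time summary for the wrapped test, so each 1-second workload costs roughly 1.3 s of wall-clock time here.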
00:07:01.512 [2024-05-15 19:22:27.497050] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3383007 ] 00:07:01.512 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.512 [2024-05-15 19:22:27.584662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.512 [2024-05-15 19:22:27.654455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.512 19:22:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:01.512 19:22:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.512 19:22:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.512 19:22:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.512 19:22:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:01.512 19:22:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.512 19:22:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.512 19:22:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.512 19:22:27 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:01.512 19:22:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.512 19:22:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.512 19:22:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.512 19:22:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:01.512 19:22:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.512 19:22:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.512 19:22:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.512 19:22:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:01.512 19:22:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.512 19:22:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.512 19:22:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.513 19:22:27 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:01.513 19:22:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.513 19:22:27 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:01.513 19:22:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.513 19:22:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.513 19:22:27 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:01.513 19:22:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.513 19:22:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.513 19:22:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.513 19:22:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:01.513 19:22:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.513 19:22:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.513 19:22:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.513 19:22:27 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:01.513 19:22:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.513 19:22:27 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:01.513 19:22:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.513 19:22:27 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:07:01.513 19:22:27 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:01.513 19:22:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.513 19:22:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.513 19:22:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.513 19:22:27 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:01.513 19:22:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.513 19:22:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.513 19:22:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.513 19:22:27 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:01.513 19:22:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.513 19:22:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.513 19:22:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.773 19:22:27 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:01.773 19:22:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.773 19:22:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.773 19:22:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.773 19:22:27 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:01.773 19:22:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.773 19:22:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.773 19:22:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.773 19:22:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:01.773 19:22:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.773 19:22:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.773 19:22:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.773 19:22:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:01.773 19:22:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.773 19:22:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.773 19:22:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:02.712 19:22:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:02.713 19:22:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:02.713 19:22:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:02.713 19:22:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:02.713 19:22:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:02.713 19:22:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:02.713 19:22:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:02.713 19:22:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:02.713 19:22:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:02.713 19:22:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:02.713 19:22:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:02.713 19:22:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:02.713 19:22:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:02.713 19:22:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:02.713 19:22:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:02.713 19:22:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:02.713 19:22:28 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:07:02.713 19:22:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:02.713 19:22:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:02.713 19:22:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:02.713 19:22:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:02.713 19:22:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:02.713 19:22:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:02.713 19:22:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:02.713 19:22:28 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.713 19:22:28 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:02.713 19:22:28 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.713 00:07:02.713 real 0m1.315s 00:07:02.713 user 0m1.211s 00:07:02.713 sys 0m0.114s 00:07:02.713 19:22:28 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:02.713 19:22:28 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:02.713 ************************************ 00:07:02.713 END TEST accel_compare 00:07:02.713 ************************************ 00:07:02.713 19:22:28 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:02.713 19:22:28 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:02.713 19:22:28 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:02.713 19:22:28 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.713 ************************************ 00:07:02.713 START TEST accel_xor 00:07:02.713 ************************************ 00:07:02.713 19:22:28 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:07:02.713 19:22:28 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:02.713 19:22:28 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:02.713 19:22:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.713 19:22:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.713 19:22:28 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:02.713 19:22:28 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:02.713 19:22:28 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:02.713 19:22:28 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.713 19:22:28 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.713 19:22:28 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.713 19:22:28 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.713 19:22:28 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.713 19:22:28 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:02.713 19:22:28 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:02.713 [2024-05-15 19:22:28.893464] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
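The first xor pass above is launched without an explicit source count; the traced config sets val=2, so two source buffers seem to be the default (an assumption, not confirmed by the log). Sketch of the bare invocation:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y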
00:07:02.713 [2024-05-15 19:22:28.893554] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3383359 ] 00:07:02.973 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.973 [2024-05-15 19:22:28.979823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.973 [2024-05-15 19:22:29.048386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.973 19:22:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.973 19:22:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.973 19:22:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.973 19:22:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.973 19:22:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.973 19:22:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.973 19:22:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.973 19:22:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.973 19:22:29 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:02.973 19:22:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.973 19:22:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.973 19:22:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.973 19:22:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.973 19:22:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.973 19:22:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.973 19:22:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.973 19:22:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.974 19:22:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.356 19:22:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:04.356 19:22:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.356 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.356 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.356 19:22:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:04.356 19:22:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.356 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.356 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.356 19:22:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:04.356 19:22:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.356 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.356 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.356 19:22:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:04.356 19:22:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.356 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.356 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:04.357 
19:22:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.357 00:07:04.357 real 0m1.313s 00:07:04.357 user 0m1.200s 00:07:04.357 sys 0m0.123s 00:07:04.357 19:22:30 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:04.357 19:22:30 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:04.357 ************************************ 00:07:04.357 END TEST accel_xor 00:07:04.357 ************************************ 00:07:04.357 19:22:30 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:04.357 19:22:30 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:04.357 19:22:30 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:04.357 19:22:30 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.357 ************************************ 00:07:04.357 START TEST accel_xor 00:07:04.357 ************************************ 00:07:04.357 19:22:30 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:04.357 [2024-05-15 19:22:30.286304] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
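The second xor pass adds -x 3 to the otherwise identical command, and the only difference in the traced config is val=3 instead of val=2, so -x presumably selects the number of xor source buffers (again an assumption, not stated in the log):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3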
00:07:04.357 [2024-05-15 19:22:30.286403] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3383706 ] 00:07:04.357 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.357 [2024-05-15 19:22:30.370460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.357 [2024-05-15 19:22:30.435506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.357 19:22:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.743 19:22:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.743 19:22:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.743 19:22:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.743 19:22:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.743 19:22:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.743 19:22:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.743 19:22:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.743 19:22:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.743 19:22:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.743 19:22:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.743 19:22:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.743 19:22:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.743 19:22:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.743 19:22:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.743 19:22:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.743 19:22:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.743 19:22:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.743 
19:22:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.743 19:22:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.743 19:22:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.743 19:22:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.743 19:22:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.743 19:22:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.743 19:22:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.743 19:22:31 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.743 19:22:31 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:05.743 19:22:31 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.743 00:07:05.743 real 0m1.307s 00:07:05.743 user 0m1.200s 00:07:05.743 sys 0m0.116s 00:07:05.743 19:22:31 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.743 19:22:31 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:05.743 ************************************ 00:07:05.743 END TEST accel_xor 00:07:05.743 ************************************ 00:07:05.743 19:22:31 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:05.743 19:22:31 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:05.743 19:22:31 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.743 19:22:31 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.743 ************************************ 00:07:05.743 START TEST accel_dif_verify 00:07:05.743 ************************************ 00:07:05.743 19:22:31 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:07:05.743 19:22:31 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:05.743 19:22:31 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:05.743 19:22:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.743 19:22:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.743 19:22:31 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:05.743 19:22:31 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:05.743 19:22:31 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:05.743 19:22:31 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.743 19:22:31 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.743 19:22:31 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.743 19:22:31 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.743 19:22:31 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.743 19:22:31 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:05.743 19:22:31 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:05.744 [2024-05-15 19:22:31.677823] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
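accel_dif_verify drops -y and adds the extra sizes visible in the trace ('4096 bytes' twice, '512 bytes', '8 bytes'); reading those as transfer size, block size and DIF metadata size is only a plausible interpretation, not something the log confirms. Sketch of the bare invocation:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_verify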
00:07:05.744 [2024-05-15 19:22:31.677931] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3384061 ] 00:07:05.744 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.744 [2024-05-15 19:22:31.777058] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.744 [2024-05-15 19:22:31.854255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.744 
19:22:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.744 19:22:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.129 19:22:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:07.129 
19:22:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.129 19:22:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.129 19:22:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.129 19:22:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:07.129 19:22:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.129 19:22:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.129 19:22:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.129 19:22:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:07.129 19:22:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.129 19:22:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.129 19:22:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.129 19:22:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:07.129 19:22:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.129 19:22:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.129 19:22:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.129 19:22:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:07.129 19:22:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.129 19:22:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.129 19:22:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.129 19:22:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:07.129 19:22:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.129 19:22:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.129 19:22:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.129 19:22:32 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:07.129 19:22:32 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:07.129 19:22:32 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.129 00:07:07.129 real 0m1.337s 00:07:07.129 user 0m1.208s 00:07:07.129 sys 0m0.141s 00:07:07.129 19:22:32 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:07.129 19:22:32 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:07.129 ************************************ 00:07:07.129 END TEST accel_dif_verify 00:07:07.129 ************************************ 00:07:07.129 19:22:33 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:07.129 19:22:33 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:07.129 19:22:33 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:07.129 19:22:33 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.129 ************************************ 00:07:07.129 START TEST accel_dif_generate 00:07:07.129 ************************************ 00:07:07.129 19:22:33 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.129 
19:22:33 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:07.129 [2024-05-15 19:22:33.095886] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:07:07.129 [2024-05-15 19:22:33.095948] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3384272 ] 00:07:07.129 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.129 [2024-05-15 19:22:33.184114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.129 [2024-05-15 19:22:33.262637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.129 19:22:33 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.129 19:22:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:07.130 19:22:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.130 19:22:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.130 19:22:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:08.513 19:22:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:08.513 19:22:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:08.513 19:22:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:08.513 19:22:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:08.513 19:22:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:08.513 19:22:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:08.513 19:22:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:08.513 19:22:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:08.513 19:22:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:08.513 19:22:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:08.513 19:22:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:08.513 19:22:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:08.513 19:22:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:08.513 19:22:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:08.513 19:22:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:08.513 19:22:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:08.513 19:22:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:08.513 19:22:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:08.513 19:22:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:08.513 19:22:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:08.513 19:22:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:08.513 19:22:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:08.513 19:22:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:08.513 19:22:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:08.513 19:22:34 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:08.513 19:22:34 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:08.513 19:22:34 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.513 00:07:08.513 real 0m1.327s 00:07:08.513 user 0m1.207s 00:07:08.513 sys 
0m0.132s 00:07:08.513 19:22:34 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:08.513 19:22:34 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:08.513 ************************************ 00:07:08.513 END TEST accel_dif_generate 00:07:08.513 ************************************ 00:07:08.513 19:22:34 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:08.513 19:22:34 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:08.513 19:22:34 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:08.513 19:22:34 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.513 ************************************ 00:07:08.513 START TEST accel_dif_generate_copy 00:07:08.513 ************************************ 00:07:08.513 19:22:34 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:07:08.513 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:08.513 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:08.513 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.513 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.513 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:08.513 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:08.513 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:08.513 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.513 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.513 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.513 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.513 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.513 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:08.513 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:08.513 [2024-05-15 19:22:34.505728] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
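The accel_perf command line recorded just above is how every case in this suite is exercised: accel_test wraps the example binary and feeds it a JSON accel config on file descriptor 62. A minimal standalone sketch, assuming the workspace layout shown in this log and that the config descriptor can simply be omitted for a software-only run outside the harness:

    # run the same dif_generate_copy workload for 1 second against the
    # software accel module (the harness-supplied -c /dev/fd/62 config is dropped here)
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./build/examples/accel_perf -t 1 -w dif_generate_copy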
00:07:08.513 [2024-05-15 19:22:34.505821] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3384479 ] 00:07:08.513 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.513 [2024-05-15 19:22:34.594788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.513 [2024-05-15 19:22:34.672751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.774 19:22:34 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.774 19:22:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.714 19:22:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:09.714 19:22:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.714 19:22:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:07:09.714 19:22:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.714 19:22:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:09.714 19:22:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.714 19:22:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.714 19:22:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.714 19:22:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:09.714 19:22:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.714 19:22:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.714 19:22:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.714 19:22:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:09.714 19:22:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.714 19:22:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.714 19:22:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.714 19:22:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:09.714 19:22:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.714 19:22:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.714 19:22:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.714 19:22:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:09.714 19:22:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.714 19:22:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.714 19:22:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.714 19:22:35 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.714 19:22:35 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:09.714 19:22:35 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.714 00:07:09.714 real 0m1.327s 00:07:09.714 user 0m1.215s 00:07:09.714 sys 0m0.122s 00:07:09.714 19:22:35 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:09.714 19:22:35 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:09.714 ************************************ 00:07:09.714 END TEST accel_dif_generate_copy 00:07:09.714 ************************************ 00:07:09.714 19:22:35 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:09.714 19:22:35 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:09.714 19:22:35 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:09.715 19:22:35 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:09.715 19:22:35 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.715 ************************************ 00:07:09.715 START TEST accel_comp 00:07:09.715 ************************************ 00:07:09.715 19:22:35 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:09.715 19:22:35 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:09.715 19:22:35 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:07:09.715 19:22:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.715 19:22:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.715 19:22:35 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:09.715 19:22:35 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:09.715 19:22:35 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:09.715 19:22:35 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.715 19:22:35 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.715 19:22:35 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.715 19:22:35 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.715 19:22:35 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.715 19:22:35 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:09.715 19:22:35 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:09.975 [2024-05-15 19:22:35.912182] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:07:09.975 [2024-05-15 19:22:35.912265] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3384802 ] 00:07:09.975 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.975 [2024-05-15 19:22:35.996754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.975 [2024-05-15 19:22:36.066498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.975 
19:22:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.975 19:22:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.976 19:22:36 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:09.976 19:22:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.976 19:22:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.976 19:22:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.976 19:22:36 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:09.976 19:22:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.976 19:22:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.976 19:22:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.976 19:22:36 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:09.976 19:22:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.976 19:22:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.976 19:22:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.976 19:22:36 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:09.976 19:22:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.976 19:22:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.976 19:22:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.976 19:22:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:09.976 19:22:36 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:07:09.976 19:22:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.976 19:22:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.976 19:22:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:09.976 19:22:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.976 19:22:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.976 19:22:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.429 19:22:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:11.429 19:22:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.429 19:22:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.429 19:22:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.429 19:22:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:11.429 19:22:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.429 19:22:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.429 19:22:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.429 19:22:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:11.429 19:22:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.429 19:22:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.429 19:22:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.429 19:22:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:11.429 19:22:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.429 19:22:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.429 19:22:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.429 19:22:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:11.429 19:22:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.429 19:22:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.429 19:22:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.429 19:22:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:11.429 19:22:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.429 19:22:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.429 19:22:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.429 19:22:37 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:11.429 19:22:37 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:11.429 19:22:37 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.429 00:07:11.429 real 0m1.314s 00:07:11.429 user 0m1.210s 00:07:11.429 sys 0m0.116s 00:07:11.429 19:22:37 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:11.429 19:22:37 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:11.429 ************************************ 00:07:11.429 END TEST accel_comp 00:07:11.429 ************************************ 00:07:11.429 19:22:37 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:11.429 19:22:37 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:11.429 19:22:37 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:11.429 19:22:37 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.429 ************************************ 00:07:11.429 START TEST accel_decomp 00:07:11.429 ************************************ 00:07:11.429 19:22:37 
accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:11.429 [2024-05-15 19:22:37.308086] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:07:11.429 [2024-05-15 19:22:37.308146] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3385154 ] 00:07:11.429 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.429 [2024-05-15 19:22:37.398554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.429 [2024-05-15 19:22:37.475062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:11.429 19:22:37 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.429 19:22:37 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.429 19:22:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:12.814 19:22:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:12.814 19:22:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.814 19:22:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:12.814 19:22:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:12.814 19:22:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:12.814 19:22:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.814 19:22:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:12.814 19:22:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:12.814 19:22:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:12.814 19:22:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.814 19:22:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:12.814 19:22:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:12.814 19:22:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:12.814 19:22:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.814 19:22:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:12.814 19:22:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:12.815 19:22:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:12.815 19:22:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.815 19:22:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:12.815 19:22:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:12.815 19:22:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:12.815 19:22:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.815 19:22:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:12.815 19:22:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:12.815 19:22:38 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:12.815 19:22:38 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:12.815 19:22:38 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.815 00:07:12.815 real 0m1.328s 00:07:12.815 user 0m1.211s 00:07:12.815 sys 0m0.128s 00:07:12.815 19:22:38 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:12.815 19:22:38 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:12.815 ************************************ 00:07:12.815 END TEST accel_decomp 00:07:12.815 ************************************ 00:07:12.815 
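The START/END banners and the real/user/sys summaries in this stretch of output come from the run_test wrapper (see the common/autotest_common.sh frames in the trace); each accel case is launched with the same pattern. As recorded above for the case that just finished:

    # exact invocation captured in this log for the plain decompress case
    run_test accel_decomp accel_test -t 1 -w decompress \
        -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y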
19:22:38 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:12.815 19:22:38 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:12.815 19:22:38 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:12.815 19:22:38 accel -- common/autotest_common.sh@10 -- # set +x 00:07:12.815 ************************************ 00:07:12.815 START TEST accel_decmop_full 00:07:12.815 ************************************ 00:07:12.815 19:22:38 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:07:12.815 [2024-05-15 19:22:38.716789] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
00:07:12.815 [2024-05-15 19:22:38.716851] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3385510 ] 00:07:12.815 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.815 [2024-05-15 19:22:38.803694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.815 [2024-05-15 19:22:38.880479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
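This case is registered as accel_decmop_full (spelled that way in accel.sh, as the banners show). It differs from accel_decomp only in the extra -o 0 option; the val='111250 bytes' entries in the trace show a full-size buffer here instead of the 4096-byte blocks used by the other cases. Recorded invocation:

    # full-buffer decompress variant, as captured above
    run_test accel_decmop_full accel_test -t 1 -w decompress \
        -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0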
00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.815 19:22:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:14.200 19:22:40 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:14.200 19:22:40 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:14.200 19:22:40 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:14.200 19:22:40 accel.accel_decmop_full -- accel/accel.sh@19 -- 
# read -r var val 00:07:14.200 19:22:40 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:14.200 19:22:40 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:14.200 19:22:40 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:14.200 19:22:40 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:14.200 19:22:40 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:14.200 19:22:40 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:14.200 19:22:40 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:14.200 19:22:40 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:14.200 19:22:40 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:14.200 19:22:40 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:14.200 19:22:40 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:14.200 19:22:40 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:14.200 19:22:40 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:14.200 19:22:40 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:14.200 19:22:40 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:14.200 19:22:40 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:14.200 19:22:40 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:14.200 19:22:40 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:14.200 19:22:40 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:14.200 19:22:40 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:14.200 19:22:40 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:14.200 19:22:40 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:14.200 19:22:40 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.200 00:07:14.200 real 0m1.341s 00:07:14.200 user 0m1.219s 00:07:14.200 sys 0m0.133s 00:07:14.200 19:22:40 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:14.200 19:22:40 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:07:14.200 ************************************ 00:07:14.200 END TEST accel_decmop_full 00:07:14.200 ************************************ 00:07:14.200 19:22:40 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:14.200 19:22:40 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:14.200 19:22:40 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:14.200 19:22:40 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.200 ************************************ 00:07:14.200 START TEST accel_decomp_mcore 00:07:14.200 ************************************ 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:14.201 [2024-05-15 19:22:40.140623] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:07:14.201 [2024-05-15 19:22:40.140686] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3385841 ] 00:07:14.201 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.201 [2024-05-15 19:22:40.228753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:14.201 [2024-05-15 19:22:40.310303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.201 [2024-05-15 19:22:40.310447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.201 [2024-05-15 19:22:40.310683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:14.201 [2024-05-15 19:22:40.310685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val='1 seconds' 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.201 19:22:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.583 19:22:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:15.583 19:22:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.583 19:22:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.583 19:22:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.583 19:22:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:15.583 19:22:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.583 19:22:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.583 19:22:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.583 19:22:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:15.583 19:22:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.583 19:22:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.583 19:22:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.583 19:22:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:15.583 19:22:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.583 19:22:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.583 19:22:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.583 19:22:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:15.583 19:22:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.583 19:22:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.583 19:22:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.583 19:22:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:15.583 19:22:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.583 19:22:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.583 19:22:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.583 19:22:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:15.583 19:22:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.583 19:22:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
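Unlike the single-core cases above, this one passes a core mask: the EAL line earlier in the trace shows -c 0xf, four cores reported available, and reactors started on cores 0-3, which is consistent with the roughly 4.4 s of user time reported below for a 1-second run. Recorded invocation:

    # multi-core decompress variant, core mask 0xf as captured above
    run_test accel_decomp_mcore accel_test -t 1 -w decompress \
        -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf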
00:07:15.583 19:22:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.583 19:22:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:15.583 19:22:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.583 19:22:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.583 19:22:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.583 19:22:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:15.583 19:22:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.583 19:22:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.583 19:22:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.583 19:22:41 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:15.583 19:22:41 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:15.583 19:22:41 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.583 00:07:15.583 real 0m1.338s 00:07:15.583 user 0m4.448s 00:07:15.583 sys 0m0.137s 00:07:15.583 19:22:41 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:15.583 19:22:41 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:15.583 ************************************ 00:07:15.583 END TEST accel_decomp_mcore 00:07:15.583 ************************************ 00:07:15.583 19:22:41 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:15.583 19:22:41 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:15.583 19:22:41 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:15.583 19:22:41 accel -- common/autotest_common.sh@10 -- # set +x 00:07:15.583 ************************************ 00:07:15.583 START TEST accel_decomp_full_mcore 00:07:15.583 ************************************ 00:07:15.583 19:22:41 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:15.583 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:15.583 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:15.583 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.583 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.583 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:15.583 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:15.583 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:15.583 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:15.583 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:15.583 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.583 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 
0 -gt 0 ]] 00:07:15.583 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:15.583 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:15.583 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:15.583 [2024-05-15 19:22:41.561498] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:07:15.583 [2024-05-15 19:22:41.561569] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3386046 ] 00:07:15.583 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.583 [2024-05-15 19:22:41.651083] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:15.583 [2024-05-15 19:22:41.731682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.583 [2024-05-15 19:22:41.731807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.583 [2024-05-15 19:22:41.731975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.583 [2024-05-15 19:22:41.731975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:15.583 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:15.583 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.583 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.846 19:22:41 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.846 19:22:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.788 19:22:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:16.788 19:22:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.788 19:22:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.788 19:22:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.788 19:22:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:16.788 19:22:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.788 19:22:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.788 19:22:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.788 19:22:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:16.788 19:22:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.788 19:22:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.788 19:22:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.788 19:22:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:16.788 19:22:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.788 19:22:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.788 19:22:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.788 19:22:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:16.788 19:22:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.788 19:22:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.788 19:22:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.788 19:22:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:16.788 19:22:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.788 19:22:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.788 19:22:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.788 19:22:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:16.788 19:22:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.788 19:22:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.788 19:22:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.788 19:22:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:16.788 19:22:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.788 19:22:42 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.788 19:22:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.788 19:22:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:16.788 19:22:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.788 19:22:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.788 19:22:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.788 19:22:42 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:16.788 19:22:42 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:16.788 19:22:42 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.788 00:07:16.788 real 0m1.349s 00:07:16.788 user 0m4.487s 00:07:16.788 sys 0m0.140s 00:07:16.788 19:22:42 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:16.788 19:22:42 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:16.788 ************************************ 00:07:16.788 END TEST accel_decomp_full_mcore 00:07:16.788 ************************************ 00:07:16.788 19:22:42 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:16.788 19:22:42 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:16.788 19:22:42 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:16.788 19:22:42 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.788 ************************************ 00:07:16.788 START TEST accel_decomp_mthread 00:07:16.788 ************************************ 00:07:16.788 19:22:42 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:16.788 19:22:42 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:16.788 19:22:42 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:16.788 19:22:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.788 19:22:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.788 19:22:42 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:16.788 19:22:42 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:16.788 19:22:42 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:16.788 19:22:42 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.788 19:22:42 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.788 19:22:42 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.788 19:22:42 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.788 19:22:42 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.788 19:22:42 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:16.788 19:22:42 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
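accel_decomp_mthread, started here, is the same decompress workload confined to one core with two worker threads. Stripped of the run_test/accel_test harness, the accel_perf command line recorded in the trace can be reproduced by hand roughly as below; the paths and flags are taken verbatim from the log, while the flag descriptions are inferred from the '1 seconds' and thread-count values echoed in the trace.

    # Reproduce the decompress run outside the harness; SPDK_DIR is an assumption,
    # the flags are those shown in the traced command line:
    #   -t 1           run time (the trace echoes '1 seconds')
    #   -w decompress  workload under test
    #   -l .../bib     compressed input file used by the suite
    #   -y             verify the decompressed output
    #   -T 2           two worker threads (the *_mthread variant)
    # (the harness also passes a JSON config on -c /dev/fd/62; omitted for a standalone run)
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK_DIR/test/accel/bib" -y -T 2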
00:07:17.049 [2024-05-15 19:22:42.992521] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:07:17.049 [2024-05-15 19:22:42.992589] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3386272 ] 00:07:17.049 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.049 [2024-05-15 19:22:43.081484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.049 [2024-05-15 19:22:43.159773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.049 19:22:43 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:17.049 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.050 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.050 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.050 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:17.050 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.050 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.050 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.050 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:17.050 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.050 19:22:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.050 19:22:43 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:07:18.433 19:22:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:18.433 19:22:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.433 19:22:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.433 19:22:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.433 19:22:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:18.433 19:22:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.433 19:22:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.433 19:22:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.433 19:22:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:18.433 19:22:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.433 19:22:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.433 19:22:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.433 19:22:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:18.433 19:22:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.433 19:22:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.433 19:22:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.433 19:22:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:18.433 19:22:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.433 19:22:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.433 19:22:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.433 19:22:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:18.433 19:22:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.434 19:22:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.434 19:22:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.434 19:22:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:18.434 19:22:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.434 19:22:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.434 19:22:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.434 19:22:44 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:18.434 19:22:44 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:18.434 19:22:44 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.434 00:07:18.434 real 0m1.333s 00:07:18.434 user 0m1.216s 00:07:18.434 sys 0m0.129s 00:07:18.434 19:22:44 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:18.434 19:22:44 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:18.434 ************************************ 00:07:18.434 END TEST accel_decomp_mthread 00:07:18.434 ************************************ 00:07:18.434 19:22:44 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:18.434 19:22:44 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:18.434 19:22:44 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:18.434 19:22:44 
accel -- common/autotest_common.sh@10 -- # set +x 00:07:18.434 ************************************ 00:07:18.434 START TEST accel_decomp_full_mthread 00:07:18.434 ************************************ 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:18.434 [2024-05-15 19:22:44.406261] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
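The hex core masks in these EAL parameter lines determine how many reactors come up: the -m 0xf run above brought up four reactors (cores 0-3), while the -c 0x1 runs, like the one that follows, start a single reactor on core 0. A standalone sketch for decoding such a mask, not part of the test scripts:

    # Decode an SPDK/DPDK hex core mask into the cores it selects.
    mask=0xf            # try 0x1 or 0x7 to match the other runs in this log
    for core in {0..31}; do
        if (( (mask >> core) & 1 )); then
            echo "reactor expected on core $core"
        fi
    done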
00:07:18.434 [2024-05-15 19:22:44.406346] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3386604 ] 00:07:18.434 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.434 [2024-05-15 19:22:44.503297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.434 [2024-05-15 19:22:44.572517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.434 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.435 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.435 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:18.710 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.710 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.710 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.711 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:18.711 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.711 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.711 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.711 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:18.711 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.711 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.711 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.711 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:18.711 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.711 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.711 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.711 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:18.711 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.711 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.711 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.711 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:07:18.711 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.711 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.711 19:22:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.655 19:22:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:19.655 19:22:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.655 19:22:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.655 19:22:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.655 19:22:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:19.655 19:22:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.655 19:22:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.655 19:22:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.655 19:22:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:19.655 19:22:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.655 19:22:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.655 19:22:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.655 19:22:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:19.655 19:22:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.655 19:22:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.655 19:22:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.655 19:22:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:19.656 19:22:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.656 19:22:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.656 19:22:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.656 19:22:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:19.656 19:22:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.656 19:22:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.656 19:22:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.656 19:22:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:19.656 19:22:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.656 19:22:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.656 19:22:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.656 19:22:45 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:19.656 19:22:45 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:19.656 19:22:45 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.656 00:07:19.656 real 0m1.355s 00:07:19.656 user 0m1.233s 00:07:19.656 sys 0m0.135s 00:07:19.656 19:22:45 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:19.656 19:22:45 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:19.656 ************************************ 00:07:19.656 END TEST accel_decomp_full_mthread 00:07:19.656 
************************************ 00:07:19.656 19:22:45 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:19.656 19:22:45 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:19.656 19:22:45 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:19.656 19:22:45 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:19.656 19:22:45 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:19.656 19:22:45 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.656 19:22:45 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.656 19:22:45 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.656 19:22:45 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.656 19:22:45 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.656 19:22:45 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.656 19:22:45 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:19.656 19:22:45 accel -- accel/accel.sh@41 -- # jq -r . 00:07:19.656 ************************************ 00:07:19.656 START TEST accel_dif_functional_tests 00:07:19.656 ************************************ 00:07:19.656 19:22:45 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:19.916 [2024-05-15 19:22:45.864900] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:07:19.916 [2024-05-15 19:22:45.864949] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3386960 ] 00:07:19.916 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.916 [2024-05-15 19:22:45.948487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:19.916 [2024-05-15 19:22:46.029114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.916 [2024-05-15 19:22:46.029241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.916 [2024-05-15 19:22:46.029244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.916 00:07:19.916 00:07:19.916 CUnit - A unit testing framework for C - Version 2.1-3 00:07:19.916 http://cunit.sourceforge.net/ 00:07:19.916 00:07:19.916 00:07:19.916 Suite: accel_dif 00:07:19.916 Test: verify: DIF generated, GUARD check ...passed 00:07:19.916 Test: verify: DIF generated, APPTAG check ...passed 00:07:19.916 Test: verify: DIF generated, REFTAG check ...passed 00:07:19.916 Test: verify: DIF not generated, GUARD check ...[2024-05-15 19:22:46.085769] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:19.916 [2024-05-15 19:22:46.085806] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:19.916 passed 00:07:19.916 Test: verify: DIF not generated, APPTAG check ...[2024-05-15 19:22:46.085841] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:19.916 [2024-05-15 19:22:46.085855] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:19.916 passed 00:07:19.916 Test: verify: DIF not generated, REFTAG check ...[2024-05-15 19:22:46.085873] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:19.916 [2024-05-15 
19:22:46.085888] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:19.916 passed 00:07:19.916 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:19.916 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-15 19:22:46.085930] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:19.916 passed 00:07:19.916 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:19.916 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:19.916 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:19.916 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-15 19:22:46.086043] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:19.916 passed 00:07:19.916 Test: generate copy: DIF generated, GUARD check ...passed 00:07:19.916 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:19.916 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:19.916 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:19.916 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:19.916 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:19.916 Test: generate copy: iovecs-len validate ...[2024-05-15 19:22:46.086227] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:07:19.916 passed 00:07:19.916 Test: generate copy: buffer alignment validate ...passed 00:07:19.916 00:07:19.916 Run Summary: Type Total Ran Passed Failed Inactive 00:07:19.916 suites 1 1 n/a 0 0 00:07:19.916 tests 20 20 20 0 0 00:07:19.916 asserts 204 204 204 0 n/a 00:07:19.916 00:07:19.916 Elapsed time = 0.000 seconds 00:07:20.177 00:07:20.177 real 0m0.384s 00:07:20.177 user 0m0.455s 00:07:20.177 sys 0m0.152s 00:07:20.177 19:22:46 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:20.177 19:22:46 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:20.177 ************************************ 00:07:20.177 END TEST accel_dif_functional_tests 00:07:20.177 ************************************ 00:07:20.177 00:07:20.177 real 0m31.119s 00:07:20.177 user 0m34.055s 00:07:20.177 sys 0m4.807s 00:07:20.177 19:22:46 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:20.177 19:22:46 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.177 ************************************ 00:07:20.177 END TEST accel 00:07:20.177 ************************************ 00:07:20.177 19:22:46 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:20.177 19:22:46 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:20.177 19:22:46 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:20.177 19:22:46 -- common/autotest_common.sh@10 -- # set +x 00:07:20.177 ************************************ 00:07:20.177 START TEST accel_rpc 00:07:20.177 ************************************ 00:07:20.177 19:22:46 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:20.439 * Looking for test storage... 
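The accel_rpc suite that starts here drives a bare spdk_tgt over JSON-RPC instead of running accel_perf. Condensed from the rpc_cmd calls recorded below, the sequence amounts to the following by-hand illustration; paths and method names are taken from the log, but this is a sketch, not the test script itself.

    # Pin the copy opcode to the software module, finish init, then read the assignment back.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_DIR/build/bin/spdk_tgt" --wait-for-rpc &
    # wait for the RPC socket here (the test uses waitforlisten) before issuing calls
    "$SPDK_DIR/scripts/rpc.py" accel_assign_opc -o copy -m software   # done before framework init in this test
    "$SPDK_DIR/scripts/rpc.py" framework_start_init                   # complete subsystem initialization
    "$SPDK_DIR/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy  # prints "software" in the run below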
00:07:20.439 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:20.439 19:22:46 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:20.439 19:22:46 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3387051 00:07:20.439 19:22:46 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3387051 00:07:20.439 19:22:46 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 3387051 ']' 00:07:20.439 19:22:46 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:20.439 19:22:46 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.439 19:22:46 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:20.439 19:22:46 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.439 19:22:46 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:20.439 19:22:46 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.439 [2024-05-15 19:22:46.492758] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:07:20.439 [2024-05-15 19:22:46.492831] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3387051 ] 00:07:20.439 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.439 [2024-05-15 19:22:46.579758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.700 [2024-05-15 19:22:46.651293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.304 19:22:47 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:21.304 19:22:47 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:21.304 19:22:47 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:21.304 19:22:47 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:21.304 19:22:47 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:21.304 19:22:47 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:21.304 19:22:47 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:21.304 19:22:47 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:21.304 19:22:47 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:21.304 19:22:47 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.304 ************************************ 00:07:21.304 START TEST accel_assign_opcode 00:07:21.304 ************************************ 00:07:21.304 19:22:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:07:21.304 19:22:47 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:21.304 19:22:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.304 19:22:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:21.304 [2024-05-15 19:22:47.389402] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:21.304 19:22:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:07:21.304 19:22:47 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:21.304 19:22:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.304 19:22:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:21.304 [2024-05-15 19:22:47.401429] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:21.304 19:22:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.304 19:22:47 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:21.304 19:22:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.304 19:22:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:21.564 19:22:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.564 19:22:47 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:21.564 19:22:47 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:21.564 19:22:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.564 19:22:47 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:21.564 19:22:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:21.564 19:22:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.564 software 00:07:21.564 00:07:21.564 real 0m0.219s 00:07:21.564 user 0m0.044s 00:07:21.564 sys 0m0.015s 00:07:21.564 19:22:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:21.564 19:22:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:21.564 ************************************ 00:07:21.564 END TEST accel_assign_opcode 00:07:21.564 ************************************ 00:07:21.564 19:22:47 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3387051 00:07:21.564 19:22:47 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 3387051 ']' 00:07:21.564 19:22:47 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 3387051 00:07:21.564 19:22:47 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:07:21.564 19:22:47 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:21.564 19:22:47 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3387051 00:07:21.564 19:22:47 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:21.564 19:22:47 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:21.564 19:22:47 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3387051' 00:07:21.564 killing process with pid 3387051 00:07:21.564 19:22:47 accel_rpc -- common/autotest_common.sh@965 -- # kill 3387051 00:07:21.564 19:22:47 accel_rpc -- common/autotest_common.sh@970 -- # wait 3387051 00:07:21.824 00:07:21.824 real 0m1.579s 00:07:21.824 user 0m1.696s 00:07:21.824 sys 0m0.465s 00:07:21.824 19:22:47 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:21.824 19:22:47 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.824 ************************************ 00:07:21.824 END TEST accel_rpc 00:07:21.824 ************************************ 00:07:21.824 19:22:47 -- spdk/autotest.sh@181 -- # 
run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:21.824 19:22:47 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:21.824 19:22:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:21.824 19:22:47 -- common/autotest_common.sh@10 -- # set +x 00:07:21.824 ************************************ 00:07:21.824 START TEST app_cmdline 00:07:21.824 ************************************ 00:07:21.824 19:22:47 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:22.084 * Looking for test storage... 00:07:22.084 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:22.084 19:22:48 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:22.084 19:22:48 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3387448 00:07:22.084 19:22:48 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3387448 00:07:22.084 19:22:48 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 3387448 ']' 00:07:22.084 19:22:48 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.084 19:22:48 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:22.084 19:22:48 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.084 19:22:48 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:22.084 19:22:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:22.084 19:22:48 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:22.084 [2024-05-15 19:22:48.117176] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
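Here cmdline.sh boots spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are reachable; the trace that follows shows spdk_get_version answering normally while env_dpdk_get_mem_stats is rejected with JSON-RPC error -32601 ("Method not found"). Exercised by hand it looks roughly like this sketch; paths and method names are taken from the log.

    # Allow-listed RPCs answer; anything else returns -32601 "Method not found".
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_DIR/build/bin/spdk_tgt" --rpcs-allowed spdk_get_version,rpc_get_methods &
    "$SPDK_DIR/scripts/rpc.py" spdk_get_version          # allowed: prints the version object seen below
    "$SPDK_DIR/scripts/rpc.py" rpc_get_methods           # allowed: lists exactly the permitted methods
    "$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats    # blocked: "Method not found" (-32601)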
00:07:22.084 [2024-05-15 19:22:48.117233] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3387448 ] 00:07:22.084 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.084 [2024-05-15 19:22:48.200047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.084 [2024-05-15 19:22:48.265694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.026 19:22:48 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:23.026 19:22:48 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:07:23.026 19:22:48 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:23.026 { 00:07:23.026 "version": "SPDK v24.05-pre git sha1 7f5235167", 00:07:23.026 "fields": { 00:07:23.026 "major": 24, 00:07:23.026 "minor": 5, 00:07:23.026 "patch": 0, 00:07:23.026 "suffix": "-pre", 00:07:23.026 "commit": "7f5235167" 00:07:23.026 } 00:07:23.026 } 00:07:23.026 19:22:49 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:23.026 19:22:49 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:23.026 19:22:49 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:23.026 19:22:49 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:23.026 19:22:49 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:23.026 19:22:49 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:23.026 19:22:49 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:23.026 19:22:49 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.026 19:22:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:23.026 19:22:49 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.026 19:22:49 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:23.026 19:22:49 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:23.026 19:22:49 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:23.026 19:22:49 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:23.026 19:22:49 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:23.026 19:22:49 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:23.026 19:22:49 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:23.026 19:22:49 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:23.026 19:22:49 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:23.026 19:22:49 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:23.026 19:22:49 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:23.026 19:22:49 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:23.026 19:22:49 
app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:23.026 19:22:49 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:23.286 request: 00:07:23.286 { 00:07:23.286 "method": "env_dpdk_get_mem_stats", 00:07:23.286 "req_id": 1 00:07:23.286 } 00:07:23.286 Got JSON-RPC error response 00:07:23.286 response: 00:07:23.286 { 00:07:23.286 "code": -32601, 00:07:23.286 "message": "Method not found" 00:07:23.286 } 00:07:23.286 19:22:49 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:23.286 19:22:49 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:23.287 19:22:49 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:23.287 19:22:49 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:23.287 19:22:49 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3387448 00:07:23.287 19:22:49 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 3387448 ']' 00:07:23.287 19:22:49 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 3387448 00:07:23.287 19:22:49 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:07:23.287 19:22:49 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:23.287 19:22:49 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3387448 00:07:23.287 19:22:49 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:23.287 19:22:49 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:23.287 19:22:49 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3387448' 00:07:23.287 killing process with pid 3387448 00:07:23.287 19:22:49 app_cmdline -- common/autotest_common.sh@965 -- # kill 3387448 00:07:23.287 19:22:49 app_cmdline -- common/autotest_common.sh@970 -- # wait 3387448 00:07:23.547 00:07:23.547 real 0m1.675s 00:07:23.547 user 0m2.109s 00:07:23.547 sys 0m0.415s 00:07:23.547 19:22:49 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:23.547 19:22:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:23.547 ************************************ 00:07:23.547 END TEST app_cmdline 00:07:23.547 ************************************ 00:07:23.547 19:22:49 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:23.547 19:22:49 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:23.547 19:22:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:23.547 19:22:49 -- common/autotest_common.sh@10 -- # set +x 00:07:23.808 ************************************ 00:07:23.808 START TEST version 00:07:23.808 ************************************ 00:07:23.808 19:22:49 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:23.808 * Looking for test storage... 
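The app_cmdline run that concludes above also exercises the negative case: an RPC outside the allowlist (env_dpdk_get_mem_stats) is rejected with the JSON-RPC "Method not found" error, code -32601. A hedged sketch of that check against the same target; the suite itself only asserts the non-zero exit status through its NOT helper:

# any method outside the allowlist must fail; the error body carries code -32601
if ./scripts/rpc.py -s /var/tmp/spdk.sock env_dpdk_get_mem_stats; then
    echo "unexpected: non-allowlisted RPC was accepted" >&2
    exit 1
fi

The target is then stopped through killprocess, exactly as traced above.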
00:07:23.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:23.808 19:22:49 version -- app/version.sh@17 -- # get_header_version major 00:07:23.808 19:22:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:23.808 19:22:49 version -- app/version.sh@14 -- # cut -f2 00:07:23.808 19:22:49 version -- app/version.sh@14 -- # tr -d '"' 00:07:23.808 19:22:49 version -- app/version.sh@17 -- # major=24 00:07:23.808 19:22:49 version -- app/version.sh@18 -- # get_header_version minor 00:07:23.808 19:22:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:23.808 19:22:49 version -- app/version.sh@14 -- # cut -f2 00:07:23.808 19:22:49 version -- app/version.sh@14 -- # tr -d '"' 00:07:23.808 19:22:49 version -- app/version.sh@18 -- # minor=5 00:07:23.808 19:22:49 version -- app/version.sh@19 -- # get_header_version patch 00:07:23.809 19:22:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:23.809 19:22:49 version -- app/version.sh@14 -- # cut -f2 00:07:23.809 19:22:49 version -- app/version.sh@14 -- # tr -d '"' 00:07:23.809 19:22:49 version -- app/version.sh@19 -- # patch=0 00:07:23.809 19:22:49 version -- app/version.sh@20 -- # get_header_version suffix 00:07:23.809 19:22:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:23.809 19:22:49 version -- app/version.sh@14 -- # cut -f2 00:07:23.809 19:22:49 version -- app/version.sh@14 -- # tr -d '"' 00:07:23.809 19:22:49 version -- app/version.sh@20 -- # suffix=-pre 00:07:23.809 19:22:49 version -- app/version.sh@22 -- # version=24.5 00:07:23.809 19:22:49 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:23.809 19:22:49 version -- app/version.sh@28 -- # version=24.5rc0 00:07:23.809 19:22:49 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:23.809 19:22:49 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:23.809 19:22:49 version -- app/version.sh@30 -- # py_version=24.5rc0 00:07:23.809 19:22:49 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:07:23.809 00:07:23.809 real 0m0.166s 00:07:23.809 user 0m0.080s 00:07:23.809 sys 0m0.122s 00:07:23.809 19:22:49 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:23.809 19:22:49 version -- common/autotest_common.sh@10 -- # set +x 00:07:23.809 ************************************ 00:07:23.809 END TEST version 00:07:23.809 ************************************ 00:07:23.809 19:22:49 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:07:23.809 19:22:49 -- spdk/autotest.sh@194 -- # uname -s 00:07:23.809 19:22:49 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:23.809 19:22:49 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:23.809 19:22:49 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:23.809 19:22:49 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 
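The version suite traced above derives the release string by grepping the SPDK_VERSION_* macros out of include/spdk/version.h and cross-checks it against the bundled Python package. A condensed sketch of that extraction, run from the checkout root, with the helper reimplemented inline for illustration (the suite's own copy lives in test/app/version.sh):

get_header_version() {
    # pull the value of one SPDK_VERSION_* macro, e.g. MAJOR -> 24, SUFFIX -> -pre
    grep -E "^#define SPDK_VERSION_$1[[:space:]]+" include/spdk/version.h | cut -f2 | tr -d '"'
}
major=$(get_header_version MAJOR)
minor=$(get_header_version MINOR)
patch=$(get_header_version PATCH)
suffix=$(get_header_version SUFFIX)
version="$major.$minor"
if (( patch != 0 )); then version="$version.$patch"; fi
if [[ -n $suffix ]]; then version="${version}rc0"; fi   # any suffix is mapped to an rc0 tag
py_version=$(PYTHONPATH=python python3 -c 'import spdk; print(spdk.__version__)')
[[ $py_version == "$version" ]]                         # 24.5rc0 == 24.5rc0 in this run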
00:07:23.809 19:22:49 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:07:23.809 19:22:49 -- spdk/autotest.sh@256 -- # timing_exit lib 00:07:23.809 19:22:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:23.809 19:22:49 -- common/autotest_common.sh@10 -- # set +x 00:07:24.070 19:22:50 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:07:24.070 19:22:50 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:07:24.070 19:22:50 -- spdk/autotest.sh@275 -- # '[' 1 -eq 1 ']' 00:07:24.070 19:22:50 -- spdk/autotest.sh@276 -- # export NET_TYPE 00:07:24.070 19:22:50 -- spdk/autotest.sh@279 -- # '[' tcp = rdma ']' 00:07:24.070 19:22:50 -- spdk/autotest.sh@282 -- # '[' tcp = tcp ']' 00:07:24.070 19:22:50 -- spdk/autotest.sh@283 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:24.070 19:22:50 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:24.070 19:22:50 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:24.070 19:22:50 -- common/autotest_common.sh@10 -- # set +x 00:07:24.070 ************************************ 00:07:24.070 START TEST nvmf_tcp 00:07:24.070 ************************************ 00:07:24.070 19:22:50 nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:24.070 * Looking for test storage... 00:07:24.070 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:24.070 19:22:50 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:24.070 19:22:50 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:24.070 19:22:50 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:24.070 19:22:50 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:24.070 19:22:50 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:24.070 19:22:50 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:24.070 19:22:50 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:24.070 19:22:50 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:24.070 19:22:50 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:24.070 19:22:50 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:24.070 19:22:50 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:24.070 19:22:50 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:24.070 19:22:50 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:24.070 19:22:50 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:24.070 19:22:50 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:24.070 19:22:50 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:24.070 19:22:50 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:24.070 19:22:50 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:24.070 19:22:50 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:24.070 19:22:50 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:24.070 19:22:50 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:24.071 19:22:50 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:24.071 19:22:50 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:24.071 19:22:50 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:24.071 19:22:50 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.071 19:22:50 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.071 19:22:50 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.071 19:22:50 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:24.071 19:22:50 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.071 19:22:50 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:24.071 19:22:50 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:24.071 19:22:50 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:24.071 19:22:50 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:24.071 19:22:50 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:24.071 19:22:50 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:24.071 19:22:50 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:24.071 19:22:50 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:24.071 19:22:50 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:24.071 19:22:50 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:24.071 19:22:50 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:24.071 19:22:50 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:24.071 19:22:50 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:24.071 19:22:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:24.071 19:22:50 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:24.071 19:22:50 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:24.071 19:22:50 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:24.071 19:22:50 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:24.071 
19:22:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:24.071 ************************************ 00:07:24.071 START TEST nvmf_example 00:07:24.071 ************************************ 00:07:24.071 19:22:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:24.333 * Looking for test storage... 00:07:24.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:24.333 19:22:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:32.473 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:32.473 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:32.473 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:32.473 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:32.473 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:32.473 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:32.473 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:32.473 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:32.473 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:32.473 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:32.473 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:32.473 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:32.474 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:32.474 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:32.474 Found net devices under 
0000:31:00.0: cvl_0_0 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:32.474 Found net devices under 0000:31:00.1: cvl_0_1 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:32.474 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:32.474 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.599 ms 00:07:32.474 00:07:32.474 --- 10.0.0.2 ping statistics --- 00:07:32.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.474 rtt min/avg/max/mdev = 0.599/0.599/0.599/0.000 ms 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:32.474 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:32.474 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.407 ms 00:07:32.474 00:07:32.474 --- 10.0.0.1 ping statistics --- 00:07:32.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.474 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3392218 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3392218 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 3392218 ']' 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
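Before the example target is launched, nvmftestinit above carves the test topology out of the two e810 ports it discovered: cvl_0_0 becomes the target-side port inside a dedicated network namespace with 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator side with 10.0.0.1/24, TCP port 4420 is opened, and both directions are ping-verified. Condensed from the trace, the plumbing amounts to:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                               # root ns -> target ns (0.599 ms above)
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # target ns -> root ns (0.407 ms above)

Every target-side command from here on is wrapped in 'ip netns exec cvl_0_0_ns_spdk', which is why the example app above is launched through that prefix.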
00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:32.474 19:22:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:32.474 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.415 19:22:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:33.415 19:22:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:07:33.415 19:22:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:33.415 19:22:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:33.415 19:22:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:33.415 19:22:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:33.415 19:22:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:33.415 19:22:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:33.415 19:22:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:33.415 19:22:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:33.415 19:22:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:33.415 19:22:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:33.415 19:22:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:33.415 19:22:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:33.415 19:22:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:33.415 19:22:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:33.415 19:22:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:33.415 19:22:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:33.415 19:22:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:33.415 19:22:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:33.415 19:22:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:33.415 19:22:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:33.415 19:22:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:33.415 19:22:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:33.415 19:22:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:33.415 19:22:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:33.415 19:22:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:33.415 19:22:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:33.415 19:22:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:33.415 EAL: No free 2048 kB hugepages reported on node 1 
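With the namespace in place, the example target is configured entirely over JSON-RPC before spdk_nvme_perf drives a 10-second, queue-depth-64, 4 KiB random read/write load against it. A condensed sketch of that RPC sequence, assuming rpc.py is pointed at the socket the example app listens on (the default /var/tmp/spdk.sock here):

rpc="./scripts/rpc.py"
$rpc nvmf_create_transport -t tcp -o -u 8192      # transport flags as traced; -u 8192 = 8 KiB I/O unit
$rpc bdev_malloc_create 64 512                    # 64 MiB, 512 B blocks; returns "Malloc0"
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The perf summary that follows (about 16.7k IOPS at roughly 3.8 ms average latency) is the single data point this run reports.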
00:07:45.649 Initializing NVMe Controllers 00:07:45.649 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:45.649 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:45.649 Initialization complete. Launching workers. 00:07:45.649 ======================================================== 00:07:45.649 Latency(us) 00:07:45.649 Device Information : IOPS MiB/s Average min max 00:07:45.649 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16717.72 65.30 3827.97 817.65 15920.66 00:07:45.649 ======================================================== 00:07:45.649 Total : 16717.72 65.30 3827.97 817.65 15920.66 00:07:45.649 00:07:45.649 19:23:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:45.649 19:23:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:45.649 19:23:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:45.649 19:23:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:45.649 19:23:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:45.649 19:23:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:45.649 19:23:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:45.649 19:23:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:45.649 rmmod nvme_tcp 00:07:45.649 rmmod nvme_fabrics 00:07:45.649 rmmod nvme_keyring 00:07:45.649 19:23:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:45.649 19:23:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:45.649 19:23:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:45.649 19:23:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3392218 ']' 00:07:45.649 19:23:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3392218 00:07:45.649 19:23:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 3392218 ']' 00:07:45.649 19:23:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 3392218 00:07:45.649 19:23:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:07:45.649 19:23:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:45.649 19:23:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3392218 00:07:45.649 19:23:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:07:45.649 19:23:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:07:45.649 19:23:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3392218' 00:07:45.649 killing process with pid 3392218 00:07:45.649 19:23:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 3392218 00:07:45.649 19:23:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 3392218 00:07:45.649 nvmf threads initialize successfully 00:07:45.649 bdev subsystem init successfully 00:07:45.649 created a nvmf target service 00:07:45.649 create targets's poll groups done 00:07:45.649 all subsystems of target started 00:07:45.649 nvmf target is running 00:07:45.649 all subsystems of target stopped 00:07:45.649 destroy targets's poll groups done 00:07:45.649 destroyed the nvmf target service 00:07:45.649 bdev subsystem finish successfully 00:07:45.649 nvmf threads destroy successfully 00:07:45.649 19:23:10 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:45.649 19:23:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:45.649 19:23:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:45.649 19:23:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:45.649 19:23:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:45.649 19:23:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.649 19:23:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:45.649 19:23:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.223 19:23:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:46.223 19:23:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:46.223 19:23:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:46.223 19:23:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:46.223 00:07:46.223 real 0m21.913s 00:07:46.223 user 0m47.347s 00:07:46.223 sys 0m7.060s 00:07:46.223 19:23:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:46.223 19:23:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:46.223 ************************************ 00:07:46.223 END TEST nvmf_example 00:07:46.223 ************************************ 00:07:46.223 19:23:12 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:46.223 19:23:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:46.223 19:23:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:46.223 19:23:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:46.223 ************************************ 00:07:46.223 START TEST nvmf_filesystem 00:07:46.223 ************************************ 00:07:46.223 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:46.223 * Looking for test storage... 
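nvmftestfini, traced above, unwinds the example setup: the kernel NVMe/TCP initiator modules are unloaded, the example target (nvmfpid 3392218 in this run) is killed and waited on, the namespace is removed, and the initiator-side address is flushed. A condensed sketch of that teardown; ip netns delete stands in here for the suite's remove_spdk_ns helper:

modprobe -v -r nvme-tcp              # also drops nvme_fabrics and nvme_keyring, as logged above
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"   # $nvmfpid is the example target started earlier
ip netns delete cvl_0_0_ns_spdk      # assumption: the effect of remove_spdk_ns for this topology
ip -4 addr flush cvl_0_1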
00:07:46.223 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:46.223 19:23:12 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:46.223 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:46.223 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:46.223 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:46.224 19:23:12 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:46.224 19:23:12 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:46.224 
19:23:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:46.224 19:23:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:46.224 #define SPDK_CONFIG_H 00:07:46.224 #define SPDK_CONFIG_APPS 1 00:07:46.224 #define SPDK_CONFIG_ARCH native 00:07:46.224 #undef SPDK_CONFIG_ASAN 00:07:46.224 #undef SPDK_CONFIG_AVAHI 00:07:46.224 #undef SPDK_CONFIG_CET 00:07:46.224 #define SPDK_CONFIG_COVERAGE 1 00:07:46.224 #define SPDK_CONFIG_CROSS_PREFIX 00:07:46.224 #undef SPDK_CONFIG_CRYPTO 00:07:46.224 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:46.224 #undef SPDK_CONFIG_CUSTOMOCF 00:07:46.224 #undef SPDK_CONFIG_DAOS 00:07:46.224 #define SPDK_CONFIG_DAOS_DIR 00:07:46.224 #define SPDK_CONFIG_DEBUG 1 00:07:46.224 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:46.224 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:46.224 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:46.224 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:46.224 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:46.224 #undef SPDK_CONFIG_DPDK_UADK 00:07:46.224 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:46.224 #define SPDK_CONFIG_EXAMPLES 1 00:07:46.224 #undef SPDK_CONFIG_FC 00:07:46.224 #define SPDK_CONFIG_FC_PATH 00:07:46.224 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:46.224 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:46.224 #undef SPDK_CONFIG_FUSE 00:07:46.224 #undef SPDK_CONFIG_FUZZER 00:07:46.224 #define SPDK_CONFIG_FUZZER_LIB 00:07:46.224 #undef SPDK_CONFIG_GOLANG 00:07:46.224 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:46.224 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:46.224 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:46.224 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:07:46.225 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:46.225 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:46.225 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:46.225 #define SPDK_CONFIG_IDXD 1 00:07:46.225 #undef SPDK_CONFIG_IDXD_KERNEL 00:07:46.225 #undef SPDK_CONFIG_IPSEC_MB 00:07:46.225 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:46.225 #define SPDK_CONFIG_ISAL 1 00:07:46.225 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:46.225 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:46.225 #define SPDK_CONFIG_LIBDIR 00:07:46.225 #undef SPDK_CONFIG_LTO 00:07:46.225 #define SPDK_CONFIG_MAX_LCORES 00:07:46.225 #define SPDK_CONFIG_NVME_CUSE 1 00:07:46.225 #undef SPDK_CONFIG_OCF 00:07:46.225 #define SPDK_CONFIG_OCF_PATH 00:07:46.225 #define SPDK_CONFIG_OPENSSL_PATH 00:07:46.225 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:46.225 #define SPDK_CONFIG_PGO_DIR 00:07:46.225 #undef 
SPDK_CONFIG_PGO_USE 00:07:46.225 #define SPDK_CONFIG_PREFIX /usr/local 00:07:46.225 #undef SPDK_CONFIG_RAID5F 00:07:46.225 #undef SPDK_CONFIG_RBD 00:07:46.225 #define SPDK_CONFIG_RDMA 1 00:07:46.225 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:46.225 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:46.225 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:46.225 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:46.225 #define SPDK_CONFIG_SHARED 1 00:07:46.225 #undef SPDK_CONFIG_SMA 00:07:46.225 #define SPDK_CONFIG_TESTS 1 00:07:46.225 #undef SPDK_CONFIG_TSAN 00:07:46.225 #define SPDK_CONFIG_UBLK 1 00:07:46.225 #define SPDK_CONFIG_UBSAN 1 00:07:46.225 #undef SPDK_CONFIG_UNIT_TESTS 00:07:46.225 #undef SPDK_CONFIG_URING 00:07:46.225 #define SPDK_CONFIG_URING_PATH 00:07:46.225 #undef SPDK_CONFIG_URING_ZNS 00:07:46.225 #undef SPDK_CONFIG_USDT 00:07:46.225 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:46.225 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:46.225 #define SPDK_CONFIG_VFIO_USER 1 00:07:46.225 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:46.225 #define SPDK_CONFIG_VHOST 1 00:07:46.225 #define SPDK_CONFIG_VIRTIO 1 00:07:46.225 #undef SPDK_CONFIG_VTUNE 00:07:46.225 #define SPDK_CONFIG_VTUNE_DIR 00:07:46.225 #define SPDK_CONFIG_WERROR 1 00:07:46.225 #define SPDK_CONFIG_WPDK_DIR 00:07:46.225 #undef SPDK_CONFIG_XNVME 00:07:46.225 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 0 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 1 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export 
SPDK_TEST_NVME_CLI 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 1 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:46.225 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 0 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : e810 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # : 0 00:07:46.226 19:23:12 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 0 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 0 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:07:46.226 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo 
leak:libfuse3.so 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 
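The run of paired "# : <value>" / "# export SPDK_TEST_*" entries traced from autotest_common.sh above (RUN_NIGHTLY, SPDK_RUN_FUNCTIONAL_TEST, SPDK_TEST_NVMF, SPDK_TEST_NVMF_TRANSPORT, and so on) is consistent with the usual bash default-then-export idiom; the sketch below only illustrates that assumed pattern on a few flags visible in the trace. The default values written here are placeholders — the real defaults live in autotest_common.sh and are overridden by the job's autorun-spdk.conf.

    # Illustrative sketch of the default-then-export pattern suggested by the
    # "# : 0" / "# export VAR" pairs in the trace above (defaults are placeholders).

    : "${RUN_NIGHTLY:=0}"                 # keep an existing value, else fall back to 0
    export RUN_NIGHTLY

    : "${SPDK_RUN_FUNCTIONAL_TEST:=0}"    # traced as ": 1" because the job config set it to 1
    export SPDK_RUN_FUNCTIONAL_TEST

    : "${SPDK_TEST_NVMF:=0}"
    export SPDK_TEST_NVMF

    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"  # traced as ": tcp" for this nvmf-tcp job
    export SPDK_TEST_NVMF_TRANSPORT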
00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j144 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 3395018 ]] 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 3395018 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.rLuPw2 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.rLuPw2/tests/target /tmp/spdk.rLuPw2 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- 
# avails["$mount"]=67108864 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=968249344 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4316180480 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=119708954624 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=129371009024 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=9662054400 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=64629882880 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=64685502464 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=55619584 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=25864224768 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=25874202624 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=9977856 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=efivarfs 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=efivarfs 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=189440 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=507904 00:07:46.489 19:23:12 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=314368 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=64683655168 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=64685506560 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=1851392 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:46.489 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=12937093120 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=12937097216 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:07:46.490 * Looking for test storage... 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/ 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=119708954624 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # new_size=11876646912 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:46.490 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:46.490 19:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:54.631 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 
(0x8086 - 0x159b)' 00:07:54.631 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:54.631 Found net devices under 0000:31:00.0: cvl_0_0 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:54.631 Found net devices under 0000:31:00.1: cvl_0_1 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:54.631 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:54.632 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:54.632 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:54.632 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:54.632 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:54.632 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:54.632 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:54.632 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:54.632 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:54.632 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:54.632 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:54.632 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:54.632 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:54.632 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:54.632 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:54.632 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:54.632 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:07:54.632 00:07:54.632 --- 10.0.0.2 ping statistics --- 00:07:54.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.632 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:07:54.632 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:54.632 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:54.632 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:07:54.632 00:07:54.632 --- 10.0.0.1 ping statistics --- 00:07:54.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.632 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:07:54.632 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:54.632 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:54.632 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:54.632 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:54.632 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:54.632 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:54.632 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:54.632 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:54.632 19:23:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:54.632 19:23:20 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:54.632 19:23:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:54.632 19:23:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:54.632 19:23:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:54.632 ************************************ 00:07:54.632 START TEST nvmf_filesystem_no_in_capsule 00:07:54.632 ************************************ 00:07:54.632 19:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:07:54.632 19:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:54.632 19:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:54.632 19:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:54.632 19:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:54.632 19:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:54.632 19:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3399316 00:07:54.632 19:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3399316 00:07:54.632 19:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:54.632 19:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 3399316 ']' 00:07:54.632 19:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.632 19:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:54.632 19:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.632 19:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:54.632 19:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:54.632 [2024-05-15 19:23:20.641176] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:07:54.632 [2024-05-15 19:23:20.641233] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:54.632 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.632 [2024-05-15 19:23:20.736669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:54.893 [2024-05-15 19:23:20.835899] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:54.893 [2024-05-15 19:23:20.835958] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:54.893 [2024-05-15 19:23:20.835966] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:54.893 [2024-05-15 19:23:20.835973] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:54.893 [2024-05-15 19:23:20.835979] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:54.893 [2024-05-15 19:23:20.836122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.893 [2024-05-15 19:23:20.836254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:54.893 [2024-05-15 19:23:20.836421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:54.893 [2024-05-15 19:23:20.836434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.523 19:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:55.524 19:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:07:55.524 19:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:55.524 19:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:55.524 19:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:55.524 19:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:55.524 19:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:55.524 19:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:55.524 19:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.524 19:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:55.524 [2024-05-15 19:23:21.572149] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:55.524 19:23:21 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.524 19:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:55.524 19:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.524 19:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:55.524 Malloc1 00:07:55.524 19:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.524 19:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:55.524 19:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.524 19:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:55.524 19:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.524 19:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:55.524 19:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.524 19:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:55.789 19:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.789 19:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:55.789 19:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.789 19:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:55.789 [2024-05-15 19:23:21.702349] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:55.789 [2024-05-15 19:23:21.702595] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:55.789 19:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.789 19:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:55.789 19:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:55.789 19:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:55.789 19:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:07:55.789 19:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:07:55.789 19:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:55.789 19:23:21 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.789 19:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:55.789 19:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.789 19:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:55.789 { 00:07:55.789 "name": "Malloc1", 00:07:55.789 "aliases": [ 00:07:55.789 "1874061a-f5d9-43ee-9e45-41da59124ebc" 00:07:55.789 ], 00:07:55.789 "product_name": "Malloc disk", 00:07:55.789 "block_size": 512, 00:07:55.789 "num_blocks": 1048576, 00:07:55.789 "uuid": "1874061a-f5d9-43ee-9e45-41da59124ebc", 00:07:55.789 "assigned_rate_limits": { 00:07:55.789 "rw_ios_per_sec": 0, 00:07:55.789 "rw_mbytes_per_sec": 0, 00:07:55.789 "r_mbytes_per_sec": 0, 00:07:55.789 "w_mbytes_per_sec": 0 00:07:55.789 }, 00:07:55.789 "claimed": true, 00:07:55.789 "claim_type": "exclusive_write", 00:07:55.789 "zoned": false, 00:07:55.789 "supported_io_types": { 00:07:55.789 "read": true, 00:07:55.789 "write": true, 00:07:55.789 "unmap": true, 00:07:55.789 "write_zeroes": true, 00:07:55.789 "flush": true, 00:07:55.789 "reset": true, 00:07:55.789 "compare": false, 00:07:55.789 "compare_and_write": false, 00:07:55.789 "abort": true, 00:07:55.789 "nvme_admin": false, 00:07:55.789 "nvme_io": false 00:07:55.789 }, 00:07:55.789 "memory_domains": [ 00:07:55.789 { 00:07:55.789 "dma_device_id": "system", 00:07:55.789 "dma_device_type": 1 00:07:55.789 }, 00:07:55.789 { 00:07:55.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.789 "dma_device_type": 2 00:07:55.789 } 00:07:55.789 ], 00:07:55.789 "driver_specific": {} 00:07:55.789 } 00:07:55.789 ]' 00:07:55.789 19:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:07:55.789 19:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:07:55.789 19:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:55.789 19:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:55.789 19:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:55.789 19:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:07:55.789 19:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:55.789 19:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:57.174 19:23:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:57.174 19:23:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:07:57.174 19:23:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:57.174 19:23:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:57.174 19:23:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:07:59.720 19:23:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:59.720 19:23:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:59.720 19:23:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:59.720 19:23:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:59.720 19:23:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:59.720 19:23:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:07:59.720 19:23:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:59.720 19:23:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:59.720 19:23:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:59.720 19:23:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:59.720 19:23:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:59.720 19:23:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:59.720 19:23:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:59.720 19:23:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:59.720 19:23:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:59.720 19:23:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:59.720 19:23:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:59.720 19:23:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:00.290 19:23:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:01.232 19:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:01.232 19:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:01.232 19:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:01.232 19:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:01.232 19:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:01.492 ************************************ 00:08:01.492 START TEST filesystem_ext4 00:08:01.492 ************************************ 
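[editor's note] The xtrace that follows exercises make_filesystem from common/autotest_common.sh against the first partition of the connected NVMe-oF namespace. A minimal sketch of that helper, reconstructed from the trace below rather than copied from the SPDK source; the retry bound and the sleep between attempts are illustrative assumptions:

  # Sketch of make_filesystem as seen in the xtrace (fstype check, force flag, mkfs);
  # the retry/sleep details are assumptions, not the upstream implementation.
  make_filesystem() {
      local fstype=$1       # ext4, btrfs or xfs
      local dev_name=$2     # e.g. /dev/nvme0n1p1
      local i=0
      local force
      if [ "$fstype" = ext4 ]; then
          force=-F          # mkfs.ext4 forces with -F
      else
          force=-f          # mkfs.btrfs / mkfs.xfs force with -f
      fi
      while ! mkfs."$fstype" $force "$dev_name"; do
          (( ++i < 3 )) || return 1   # give up after a few attempts (assumed bound)
          sleep 1
      done
      return 0
  }
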
00:08:01.492 19:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:01.492 19:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:01.492 19:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:01.492 19:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:01.492 19:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:08:01.492 19:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:01.492 19:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:08:01.492 19:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:08:01.492 19:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:08:01.492 19:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:08:01.492 19:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:01.492 mke2fs 1.46.5 (30-Dec-2021) 00:08:01.492 Discarding device blocks: 0/522240 done 00:08:01.492 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:01.492 Filesystem UUID: f0dce971-daa2-4c67-846d-02b974dde5ef 00:08:01.492 Superblock backups stored on blocks: 00:08:01.492 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:01.492 00:08:01.492 Allocating group tables: 0/64 done 00:08:01.492 Writing inode tables: 0/64 done 00:08:02.062 Creating journal (8192 blocks): done 00:08:02.323 Writing superblocks and filesystem accounting information: 0/64 done 00:08:02.323 00:08:02.323 19:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:08:02.323 19:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:02.583 19:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:02.583 19:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:08:02.583 19:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:02.583 19:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:08:02.583 19:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:02.583 19:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:02.845 19:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3399316 00:08:02.845 19:23:28 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:02.845 19:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:02.845 19:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:02.845 19:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:02.845 00:08:02.845 real 0m1.347s 00:08:02.845 user 0m0.023s 00:08:02.845 sys 0m0.076s 00:08:02.845 19:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:02.845 19:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:02.845 ************************************ 00:08:02.845 END TEST filesystem_ext4 00:08:02.845 ************************************ 00:08:02.845 19:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:02.845 19:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:02.845 19:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:02.845 19:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:02.845 ************************************ 00:08:02.845 START TEST filesystem_btrfs 00:08:02.845 ************************************ 00:08:02.845 19:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:02.845 19:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:02.845 19:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:02.845 19:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:02.845 19:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:08:02.845 19:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:02.845 19:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:08:02.845 19:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:08:02.845 19:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:08:02.845 19:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:08:02.845 19:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:03.416 btrfs-progs v6.6.2 00:08:03.416 See https://btrfs.readthedocs.io for more information. 
00:08:03.416 00:08:03.416 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:03.416 NOTE: several default settings have changed in version 5.15, please make sure 00:08:03.416 this does not affect your deployments: 00:08:03.416 - DUP for metadata (-m dup) 00:08:03.416 - enabled no-holes (-O no-holes) 00:08:03.416 - enabled free-space-tree (-R free-space-tree) 00:08:03.416 00:08:03.416 Label: (null) 00:08:03.416 UUID: a3a27850-d902-4563-a325-0334950af91c 00:08:03.416 Node size: 16384 00:08:03.416 Sector size: 4096 00:08:03.416 Filesystem size: 510.00MiB 00:08:03.416 Block group profiles: 00:08:03.416 Data: single 8.00MiB 00:08:03.416 Metadata: DUP 32.00MiB 00:08:03.416 System: DUP 8.00MiB 00:08:03.416 SSD detected: yes 00:08:03.417 Zoned device: no 00:08:03.417 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:03.417 Runtime features: free-space-tree 00:08:03.417 Checksum: crc32c 00:08:03.417 Number of devices: 1 00:08:03.417 Devices: 00:08:03.417 ID SIZE PATH 00:08:03.417 1 510.00MiB /dev/nvme0n1p1 00:08:03.417 00:08:03.417 19:23:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:08:03.417 19:23:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:03.988 19:23:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:03.988 19:23:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:08:03.988 19:23:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:03.988 19:23:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:08:03.988 19:23:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:03.988 19:23:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:03.988 19:23:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3399316 00:08:03.988 19:23:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:03.988 19:23:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:03.988 19:23:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:03.988 19:23:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:03.988 00:08:03.988 real 0m1.277s 00:08:03.988 user 0m0.028s 00:08:03.988 sys 0m0.138s 00:08:03.988 19:23:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:03.988 19:23:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:03.988 ************************************ 00:08:03.988 END TEST filesystem_btrfs 00:08:03.988 ************************************ 00:08:04.248 19:23:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test 
filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:04.249 19:23:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:04.249 19:23:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:04.249 19:23:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.249 ************************************ 00:08:04.249 START TEST filesystem_xfs 00:08:04.249 ************************************ 00:08:04.249 19:23:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:08:04.249 19:23:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:04.249 19:23:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:04.249 19:23:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:04.249 19:23:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:08:04.249 19:23:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:04.249 19:23:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:08:04.249 19:23:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:08:04.249 19:23:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:08:04.249 19:23:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:08:04.249 19:23:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:04.249 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:04.249 = sectsz=512 attr=2, projid32bit=1 00:08:04.249 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:04.249 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:04.249 data = bsize=4096 blocks=130560, imaxpct=25 00:08:04.249 = sunit=0 swidth=0 blks 00:08:04.249 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:04.249 log =internal log bsize=4096 blocks=16384, version=2 00:08:04.249 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:04.249 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:05.191 Discarding blocks...Done. 
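[editor's note] After each mkfs, target/filesystem.sh runs the same mount-and-verify sequence that appears in the trace below: mount the partition, create and remove a file, unmount, then confirm the nvmf target is still alive and the block devices are still visible. A hedged sketch of that sequence; the mount point, pid 3399316 and device names are taken from this log, and the real script reads them from variables (e.g. $nvmfpid) rather than hard-coding them:

  # Sketch of the per-filesystem check driven by run_test filesystem_<fstype>.
  nvmf_filesystem_create() {
      local fstype=$1 nvme_name=$2              # e.g. "xfs" "nvme0n1"
      make_filesystem "$fstype" "/dev/${nvme_name}p1" || return 1
      mount "/dev/${nvme_name}p1" /mnt/device   # mount over the NVMe/TCP namespace
      touch /mnt/device/aaa                     # exercise a write through the transport
      sync
      rm /mnt/device/aaa
      sync
      umount /mnt/device
      kill -0 3399316                           # nvmf_tgt (pid from this run) must survive
      lsblk -l -o NAME | grep -q -w "$nvme_name"        # namespace still visible
      lsblk -l -o NAME | grep -q -w "${nvme_name}p1"    # partition still visible
  }
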
00:08:05.191 19:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:08:05.191 19:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:07.101 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:07.101 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:07.101 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:07.101 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:07.101 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:07.101 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:07.101 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3399316 00:08:07.101 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:07.101 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:07.101 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:07.101 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:07.101 00:08:07.102 real 0m3.025s 00:08:07.102 user 0m0.027s 00:08:07.102 sys 0m0.080s 00:08:07.102 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:07.102 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:07.102 ************************************ 00:08:07.102 END TEST filesystem_xfs 00:08:07.102 ************************************ 00:08:07.361 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:07.361 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:07.361 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:07.361 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:07.361 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:07.361 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:08:07.361 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:07.361 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:07.361 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:07.361 
19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:07.361 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:08:07.361 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:07.361 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.361 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:07.361 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.361 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:07.361 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3399316 00:08:07.361 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 3399316 ']' 00:08:07.361 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 3399316 00:08:07.361 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:08:07.361 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:07.361 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3399316 00:08:07.621 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:07.621 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:07.621 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3399316' 00:08:07.621 killing process with pid 3399316 00:08:07.621 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 3399316 00:08:07.621 [2024-05-15 19:23:33.569723] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:07.621 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 3399316 00:08:07.621 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:07.621 00:08:07.621 real 0m13.225s 00:08:07.621 user 0m52.007s 00:08:07.621 sys 0m1.307s 00:08:07.621 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:07.621 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:07.621 ************************************ 00:08:07.621 END TEST nvmf_filesystem_no_in_capsule 00:08:07.621 ************************************ 00:08:07.882 19:23:33 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:07.882 19:23:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # 
'[' 3 -le 1 ']' 00:08:07.882 19:23:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:07.882 19:23:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:07.882 ************************************ 00:08:07.882 START TEST nvmf_filesystem_in_capsule 00:08:07.882 ************************************ 00:08:07.882 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:08:07.882 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:07.882 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:07.882 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:07.882 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:07.882 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:07.882 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3402196 00:08:07.882 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3402196 00:08:07.882 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 3402196 ']' 00:08:07.882 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:07.882 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.882 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:07.882 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.882 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:07.882 19:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:07.882 [2024-05-15 19:23:33.950767] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:08:07.882 [2024-05-15 19:23:33.950815] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:07.882 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.882 [2024-05-15 19:23:34.041927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:08.143 [2024-05-15 19:23:34.108570] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:08.143 [2024-05-15 19:23:34.108604] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:08.143 [2024-05-15 19:23:34.108611] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:08.143 [2024-05-15 19:23:34.108617] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:08.143 [2024-05-15 19:23:34.108623] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:08.143 [2024-05-15 19:23:34.108724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:08.143 [2024-05-15 19:23:34.108841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:08.143 [2024-05-15 19:23:34.108999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.144 [2024-05-15 19:23:34.109000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:08.714 19:23:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:08.714 19:23:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:08:08.714 19:23:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:08.714 19:23:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:08.714 19:23:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:08.714 19:23:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:08.714 19:23:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:08.714 19:23:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:08.714 19:23:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.714 19:23:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:08.714 [2024-05-15 19:23:34.870180] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:08.714 19:23:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.714 19:23:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:08.714 19:23:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.714 19:23:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:08.974 Malloc1 00:08:08.974 19:23:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.974 19:23:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:08.974 19:23:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.974 19:23:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:08.974 19:23:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.974 19:23:34 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:08.974 19:23:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.974 19:23:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:08.974 19:23:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.974 19:23:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:08.974 19:23:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.974 19:23:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:08.974 [2024-05-15 19:23:35.003368] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:08.974 [2024-05-15 19:23:35.003611] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:08.974 19:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.974 19:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:08.975 19:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:08:08.975 19:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:08:08.975 19:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:08:08.975 19:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:08:08.975 19:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:08.975 19:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.975 19:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:08.975 19:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.975 19:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:08:08.975 { 00:08:08.975 "name": "Malloc1", 00:08:08.975 "aliases": [ 00:08:08.975 "3164f77e-e9dd-4d8e-aa04-a39d6adf86a4" 00:08:08.975 ], 00:08:08.975 "product_name": "Malloc disk", 00:08:08.975 "block_size": 512, 00:08:08.975 "num_blocks": 1048576, 00:08:08.975 "uuid": "3164f77e-e9dd-4d8e-aa04-a39d6adf86a4", 00:08:08.975 "assigned_rate_limits": { 00:08:08.975 "rw_ios_per_sec": 0, 00:08:08.975 "rw_mbytes_per_sec": 0, 00:08:08.975 "r_mbytes_per_sec": 0, 00:08:08.975 "w_mbytes_per_sec": 0 00:08:08.975 }, 00:08:08.975 "claimed": true, 00:08:08.975 "claim_type": "exclusive_write", 00:08:08.975 "zoned": false, 00:08:08.975 "supported_io_types": { 00:08:08.975 "read": true, 00:08:08.975 "write": true, 00:08:08.975 "unmap": true, 00:08:08.975 "write_zeroes": true, 00:08:08.975 "flush": true, 00:08:08.975 "reset": true, 
00:08:08.975 "compare": false, 00:08:08.975 "compare_and_write": false, 00:08:08.975 "abort": true, 00:08:08.975 "nvme_admin": false, 00:08:08.975 "nvme_io": false 00:08:08.975 }, 00:08:08.975 "memory_domains": [ 00:08:08.975 { 00:08:08.975 "dma_device_id": "system", 00:08:08.975 "dma_device_type": 1 00:08:08.975 }, 00:08:08.975 { 00:08:08.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.975 "dma_device_type": 2 00:08:08.975 } 00:08:08.975 ], 00:08:08.975 "driver_specific": {} 00:08:08.975 } 00:08:08.975 ]' 00:08:08.975 19:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:08:08.975 19:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:08:08.975 19:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:08:08.975 19:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:08:08.975 19:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:08:08.975 19:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:08:08.975 19:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:08.975 19:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:10.886 19:23:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:10.886 19:23:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:08:10.886 19:23:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:10.886 19:23:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:10.886 19:23:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:08:12.798 19:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:12.798 19:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:12.798 19:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:12.798 19:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:12.798 19:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:12.798 19:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:08:12.798 19:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:12.798 19:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:12.798 19:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:12.798 19:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:12.798 19:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:12.798 19:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:12.798 19:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:12.798 19:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:12.798 19:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:12.798 19:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:12.799 19:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:13.059 19:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:13.319 19:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:14.260 19:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:14.260 19:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:14.260 19:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:14.260 19:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:14.260 19:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:14.260 ************************************ 00:08:14.260 START TEST filesystem_in_capsule_ext4 00:08:14.260 ************************************ 00:08:14.260 19:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:14.260 19:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:14.260 19:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:14.261 19:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:14.261 19:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:08:14.261 19:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:14.261 19:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:08:14.261 19:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:08:14.261 19:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:08:14.261 19:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:08:14.261 19:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:14.261 mke2fs 1.46.5 (30-Dec-2021) 00:08:14.261 Discarding device blocks: 0/522240 done 00:08:14.521 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:14.521 Filesystem UUID: e2e5f8c4-7004-445c-bbbe-696529ee5548 00:08:14.521 Superblock backups stored on blocks: 00:08:14.521 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:14.521 00:08:14.521 Allocating group tables: 0/64 done 00:08:14.521 Writing inode tables: 0/64 done 00:08:17.818 Creating journal (8192 blocks): done 00:08:17.818 Writing superblocks and filesystem accounting information: 0/64 done 00:08:17.818 00:08:17.818 19:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:08:17.818 19:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:17.818 19:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:18.079 19:23:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:18.079 19:23:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:18.079 19:23:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:18.079 19:23:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:18.079 19:23:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:18.079 19:23:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3402196 00:08:18.079 19:23:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:18.079 19:23:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:18.079 19:23:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:18.079 19:23:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:18.079 00:08:18.079 real 0m3.710s 00:08:18.079 user 0m0.032s 00:08:18.079 sys 0m0.071s 00:08:18.079 19:23:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:18.079 19:23:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:18.079 ************************************ 00:08:18.079 END TEST filesystem_in_capsule_ext4 00:08:18.079 ************************************ 00:08:18.079 19:23:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:18.079 19:23:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:18.079 19:23:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:18.079 19:23:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:18.079 ************************************ 00:08:18.079 START TEST filesystem_in_capsule_btrfs 00:08:18.079 ************************************ 00:08:18.079 19:23:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:18.079 19:23:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:18.079 19:23:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:18.079 19:23:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:18.079 19:23:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:08:18.079 19:23:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:18.079 19:23:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:08:18.080 19:23:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:08:18.080 19:23:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:08:18.080 19:23:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:08:18.080 19:23:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:18.649 btrfs-progs v6.6.2 00:08:18.649 See https://btrfs.readthedocs.io for more information. 00:08:18.649 00:08:18.649 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:18.649 NOTE: several default settings have changed in version 5.15, please make sure 00:08:18.649 this does not affect your deployments: 00:08:18.649 - DUP for metadata (-m dup) 00:08:18.649 - enabled no-holes (-O no-holes) 00:08:18.649 - enabled free-space-tree (-R free-space-tree) 00:08:18.649 00:08:18.649 Label: (null) 00:08:18.649 UUID: ba32b30e-9113-4cb6-a851-873ac6447b52 00:08:18.649 Node size: 16384 00:08:18.649 Sector size: 4096 00:08:18.649 Filesystem size: 510.00MiB 00:08:18.649 Block group profiles: 00:08:18.649 Data: single 8.00MiB 00:08:18.649 Metadata: DUP 32.00MiB 00:08:18.649 System: DUP 8.00MiB 00:08:18.649 SSD detected: yes 00:08:18.649 Zoned device: no 00:08:18.649 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:18.649 Runtime features: free-space-tree 00:08:18.649 Checksum: crc32c 00:08:18.649 Number of devices: 1 00:08:18.649 Devices: 00:08:18.649 ID SIZE PATH 00:08:18.649 1 510.00MiB /dev/nvme0n1p1 00:08:18.649 00:08:18.649 19:23:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:08:18.649 19:23:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:18.909 19:23:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:18.909 19:23:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:18.909 19:23:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:18.909 19:23:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:18.909 19:23:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:18.909 19:23:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:19.169 19:23:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3402196 00:08:19.169 19:23:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:19.169 19:23:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:19.169 19:23:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:19.169 19:23:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:19.169 00:08:19.169 real 0m0.951s 00:08:19.169 user 0m0.025s 00:08:19.169 sys 0m0.140s 00:08:19.169 19:23:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:19.169 19:23:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:19.169 ************************************ 00:08:19.169 END TEST filesystem_in_capsule_btrfs 00:08:19.169 ************************************ 00:08:19.169 19:23:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:19.169 19:23:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:19.169 19:23:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:19.169 19:23:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:19.169 ************************************ 00:08:19.169 START TEST filesystem_in_capsule_xfs 00:08:19.169 ************************************ 00:08:19.169 19:23:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:08:19.169 19:23:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:19.169 19:23:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:19.169 19:23:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:19.169 19:23:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:08:19.169 19:23:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:19.169 19:23:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:08:19.169 19:23:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:08:19.169 19:23:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:08:19.169 19:23:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:08:19.169 19:23:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:19.169 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:19.169 = sectsz=512 attr=2, projid32bit=1 00:08:19.169 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:19.169 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:19.169 data = bsize=4096 blocks=130560, imaxpct=25 00:08:19.169 = sunit=0 swidth=0 blks 00:08:19.169 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:19.169 log =internal log bsize=4096 blocks=16384, version=2 00:08:19.169 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:19.169 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:20.110 Discarding blocks...Done. 
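The per-filesystem check that the trace repeats above for ext4, btrfs and now xfs can be condensed into the bash sketch below. This is a reconstruction from the log, not the SPDK target/filesystem.sh code itself; the helper name verify_fs is hypothetical, while the device path, mount point and lsblk check are taken from the trace.
  # Condensed sketch of the pattern traced above (assumption: reconstructed from the
  # log output, not copied from target/filesystem.sh; verify_fs is an illustrative name).
  verify_fs() {
      local fstype=$1 force=$2                 # log shows -F for ext4, -f for btrfs/xfs
      local dev=/dev/nvme0n1p1 mnt=/mnt/device
      mkfs."$fstype" "$force" "$dev"           # create the filesystem on the NVMe-oF partition
      mount "$dev" "$mnt"
      touch "$mnt/aaa" && sync                 # prove the mounted namespace accepts writes
      rm "$mnt/aaa" && sync
      umount "$mnt"
      lsblk -l -o NAME | grep -q -w nvme0n1p1  # partition must still be listed afterwards
  }
  # matching the three runs in the trace: verify_fs ext4 -F; verify_fs btrfs -f; verify_fs xfs -f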
00:08:20.110 19:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:08:20.110 19:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:22.652 19:23:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:22.652 19:23:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:22.652 19:23:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:22.652 19:23:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:22.652 19:23:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:22.652 19:23:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:22.652 19:23:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3402196 00:08:22.652 19:23:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:22.652 19:23:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:22.652 19:23:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:22.652 19:23:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:22.652 00:08:22.652 real 0m3.340s 00:08:22.652 user 0m0.027s 00:08:22.652 sys 0m0.077s 00:08:22.652 19:23:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:22.652 19:23:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:22.652 ************************************ 00:08:22.652 END TEST filesystem_in_capsule_xfs 00:08:22.652 ************************************ 00:08:22.652 19:23:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:22.912 19:23:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:22.912 19:23:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:22.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:22.912 19:23:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:22.912 19:23:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:08:22.912 19:23:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:22.912 19:23:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:22.912 19:23:49 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:22.912 19:23:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:22.912 19:23:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:08:22.912 19:23:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:22.912 19:23:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.912 19:23:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:22.912 19:23:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.912 19:23:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:22.912 19:23:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3402196 00:08:22.912 19:23:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 3402196 ']' 00:08:22.912 19:23:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 3402196 00:08:22.912 19:23:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:08:22.912 19:23:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:22.912 19:23:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3402196 00:08:23.172 19:23:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:23.172 19:23:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:23.172 19:23:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3402196' 00:08:23.172 killing process with pid 3402196 00:08:23.172 19:23:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 3402196 00:08:23.172 [2024-05-15 19:23:49.100395] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:23.172 19:23:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 3402196 00:08:23.172 19:23:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:23.172 00:08:23.172 real 0m15.445s 00:08:23.172 user 1m0.974s 00:08:23.172 sys 0m1.315s 00:08:23.172 19:23:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:23.172 19:23:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:23.172 ************************************ 00:08:23.172 END TEST nvmf_filesystem_in_capsule 00:08:23.172 ************************************ 00:08:23.433 19:23:49 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:23.433 19:23:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:08:23.433 19:23:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:23.433 19:23:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:23.433 19:23:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:23.433 19:23:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:23.433 19:23:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:23.433 rmmod nvme_tcp 00:08:23.433 rmmod nvme_fabrics 00:08:23.433 rmmod nvme_keyring 00:08:23.433 19:23:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:23.433 19:23:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:23.433 19:23:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:23.433 19:23:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:23.433 19:23:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:23.433 19:23:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:23.433 19:23:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:23.433 19:23:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:23.433 19:23:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:23.433 19:23:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.433 19:23:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:23.433 19:23:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.348 19:23:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:25.348 00:08:25.348 real 0m39.282s 00:08:25.348 user 1m55.234s 00:08:25.348 sys 0m8.855s 00:08:25.348 19:23:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:25.348 19:23:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:25.348 ************************************ 00:08:25.348 END TEST nvmf_filesystem 00:08:25.348 ************************************ 00:08:25.609 19:23:51 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:25.609 19:23:51 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:25.609 19:23:51 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:25.609 19:23:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:25.609 ************************************ 00:08:25.609 START TEST nvmf_target_discovery 00:08:25.609 ************************************ 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:25.609 * Looking for test storage... 
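The nvmftestfini teardown traced above (module removal for the nvme TCP transport and flushing the test interface) boils down to roughly the sequence sketched here; it is inferred from the log, not the literal nvmf/common.sh code, and the wrapper name plus the explicit netns deletion are assumptions.
  # Sketch of the teardown steps visible in the trace above; wrapper name and
  # exact ordering are reconstructed, not taken from nvmf/common.sh.
  teardown_tcp_transport() {
      sync                                          # flush outstanding I/O before unloading
      modprobe -v -r nvme-tcp                       # log shows this also rmmod's nvme_fabrics/nvme_keyring
      modprobe -v -r nvme-fabrics
      ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumed equivalent of _remove_spdk_ns
      ip -4 addr flush cvl_0_1                      # drop the initiator-side test address
  }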
00:08:25.609 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:25.609 19:23:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:33.812 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:33.812 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:33.812 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:33.812 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:33.812 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:33.812 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:33.812 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:33.812 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:33.812 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:33.812 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:33.812 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:33.812 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:33.812 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:33.812 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:33.812 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:33.812 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:33.812 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:33.812 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:33.812 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:33.812 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:33.812 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:33.812 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:33.812 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:33.813 19:23:59 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:33.813 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:33.813 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:33.813 Found net devices under 0000:31:00.0: cvl_0_0 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:33.813 Found net devices under 0000:31:00.1: cvl_0_1 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:33.813 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:34.075 19:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:34.075 19:24:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:34.075 19:24:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:34.075 19:24:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:34.075 19:24:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:34.075 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:34.075 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.700 ms 00:08:34.075 00:08:34.075 --- 10.0.0.2 ping statistics --- 00:08:34.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.075 rtt min/avg/max/mdev = 0.700/0.700/0.700/0.000 ms 00:08:34.075 19:24:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:34.075 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:34.075 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.386 ms 00:08:34.075 00:08:34.075 --- 10.0.0.1 ping statistics --- 00:08:34.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.075 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:08:34.075 19:24:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:34.075 19:24:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:34.075 19:24:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:34.075 19:24:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:34.075 19:24:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:34.075 19:24:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:34.075 19:24:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:34.075 19:24:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:34.075 19:24:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:34.075 19:24:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:34.075 19:24:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:34.075 19:24:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:34.075 19:24:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:34.075 19:24:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3410177 00:08:34.075 19:24:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 3410177 00:08:34.075 19:24:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:34.075 19:24:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 3410177 ']' 00:08:34.075 19:24:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.075 19:24:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:34.075 19:24:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:34.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.075 19:24:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:34.075 19:24:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:34.336 [2024-05-15 19:24:00.269391] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:08:34.336 [2024-05-15 19:24:00.269456] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:34.336 EAL: No free 2048 kB hugepages reported on node 1 00:08:34.336 [2024-05-15 19:24:00.368653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:34.336 [2024-05-15 19:24:00.466879] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:34.336 [2024-05-15 19:24:00.466946] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:34.336 [2024-05-15 19:24:00.466956] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:34.336 [2024-05-15 19:24:00.466962] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:34.336 [2024-05-15 19:24:00.466968] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:34.336 [2024-05-15 19:24:00.467103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.336 [2024-05-15 19:24:00.467234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:34.336 [2024-05-15 19:24:00.467401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:34.336 [2024-05-15 19:24:00.467402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.277 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:35.277 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:08:35.277 19:24:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:35.277 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:35.277 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.277 19:24:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:35.277 19:24:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:35.277 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.277 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.277 [2024-05-15 19:24:01.147945] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:35.277 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.277 19:24:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:35.277 19:24:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:35.277 19:24:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:35.277 19:24:01 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.277 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.277 Null1 00:08:35.277 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.277 19:24:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:35.277 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.277 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.277 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.277 19:24:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:35.277 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.277 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.277 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.277 19:24:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:35.277 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.277 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.278 [2024-05-15 19:24:01.208077] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:35.278 [2024-05-15 19:24:01.208330] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.278 Null2 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.278 Null3 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.278 Null4 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:08:35.278 00:08:35.278 Discovery Log Number of Records 6, Generation counter 6 00:08:35.278 =====Discovery Log Entry 0====== 00:08:35.278 trtype: tcp 00:08:35.278 adrfam: ipv4 00:08:35.278 subtype: current discovery subsystem 00:08:35.278 treq: not required 00:08:35.278 portid: 0 00:08:35.278 trsvcid: 4420 00:08:35.278 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:35.278 traddr: 10.0.0.2 00:08:35.278 eflags: explicit discovery connections, duplicate discovery information 00:08:35.278 sectype: none 00:08:35.278 =====Discovery Log Entry 1====== 00:08:35.278 trtype: tcp 00:08:35.278 adrfam: ipv4 00:08:35.278 subtype: nvme subsystem 00:08:35.278 treq: not required 00:08:35.278 portid: 0 00:08:35.278 trsvcid: 4420 00:08:35.278 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:35.278 traddr: 10.0.0.2 00:08:35.278 eflags: none 00:08:35.278 sectype: none 00:08:35.278 =====Discovery Log Entry 2====== 00:08:35.278 trtype: tcp 00:08:35.278 adrfam: ipv4 00:08:35.278 subtype: nvme subsystem 00:08:35.278 treq: not required 00:08:35.278 portid: 0 00:08:35.278 trsvcid: 4420 00:08:35.278 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:35.278 traddr: 10.0.0.2 00:08:35.278 eflags: none 00:08:35.278 sectype: none 00:08:35.278 =====Discovery Log Entry 3====== 00:08:35.278 trtype: tcp 00:08:35.278 adrfam: ipv4 00:08:35.278 subtype: nvme subsystem 00:08:35.278 treq: not required 00:08:35.278 portid: 0 00:08:35.278 trsvcid: 4420 00:08:35.278 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:35.278 traddr: 10.0.0.2 
00:08:35.278 eflags: none 00:08:35.278 sectype: none 00:08:35.278 =====Discovery Log Entry 4====== 00:08:35.278 trtype: tcp 00:08:35.278 adrfam: ipv4 00:08:35.278 subtype: nvme subsystem 00:08:35.278 treq: not required 00:08:35.278 portid: 0 00:08:35.278 trsvcid: 4420 00:08:35.278 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:35.278 traddr: 10.0.0.2 00:08:35.278 eflags: none 00:08:35.278 sectype: none 00:08:35.278 =====Discovery Log Entry 5====== 00:08:35.278 trtype: tcp 00:08:35.278 adrfam: ipv4 00:08:35.278 subtype: discovery subsystem referral 00:08:35.278 treq: not required 00:08:35.278 portid: 0 00:08:35.278 trsvcid: 4430 00:08:35.278 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:35.278 traddr: 10.0.0.2 00:08:35.278 eflags: none 00:08:35.278 sectype: none 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:35.278 Perform nvmf subsystem discovery via RPC 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.278 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.540 [ 00:08:35.540 { 00:08:35.540 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:35.540 "subtype": "Discovery", 00:08:35.540 "listen_addresses": [ 00:08:35.540 { 00:08:35.540 "trtype": "TCP", 00:08:35.540 "adrfam": "IPv4", 00:08:35.540 "traddr": "10.0.0.2", 00:08:35.540 "trsvcid": "4420" 00:08:35.540 } 00:08:35.540 ], 00:08:35.540 "allow_any_host": true, 00:08:35.540 "hosts": [] 00:08:35.540 }, 00:08:35.540 { 00:08:35.540 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:35.540 "subtype": "NVMe", 00:08:35.540 "listen_addresses": [ 00:08:35.540 { 00:08:35.540 "trtype": "TCP", 00:08:35.540 "adrfam": "IPv4", 00:08:35.540 "traddr": "10.0.0.2", 00:08:35.540 "trsvcid": "4420" 00:08:35.540 } 00:08:35.540 ], 00:08:35.540 "allow_any_host": true, 00:08:35.540 "hosts": [], 00:08:35.540 "serial_number": "SPDK00000000000001", 00:08:35.540 "model_number": "SPDK bdev Controller", 00:08:35.540 "max_namespaces": 32, 00:08:35.540 "min_cntlid": 1, 00:08:35.540 "max_cntlid": 65519, 00:08:35.540 "namespaces": [ 00:08:35.540 { 00:08:35.540 "nsid": 1, 00:08:35.540 "bdev_name": "Null1", 00:08:35.540 "name": "Null1", 00:08:35.540 "nguid": "F5B20A6D04A743E9878CAA0BF6823EF1", 00:08:35.540 "uuid": "f5b20a6d-04a7-43e9-878c-aa0bf6823ef1" 00:08:35.540 } 00:08:35.540 ] 00:08:35.540 }, 00:08:35.540 { 00:08:35.540 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:35.540 "subtype": "NVMe", 00:08:35.540 "listen_addresses": [ 00:08:35.540 { 00:08:35.540 "trtype": "TCP", 00:08:35.540 "adrfam": "IPv4", 00:08:35.540 "traddr": "10.0.0.2", 00:08:35.540 "trsvcid": "4420" 00:08:35.540 } 00:08:35.540 ], 00:08:35.540 "allow_any_host": true, 00:08:35.540 "hosts": [], 00:08:35.540 "serial_number": "SPDK00000000000002", 00:08:35.540 "model_number": "SPDK bdev Controller", 00:08:35.540 "max_namespaces": 32, 00:08:35.540 "min_cntlid": 1, 00:08:35.540 "max_cntlid": 65519, 00:08:35.540 "namespaces": [ 00:08:35.540 { 00:08:35.540 "nsid": 1, 00:08:35.540 "bdev_name": "Null2", 00:08:35.540 "name": "Null2", 00:08:35.540 "nguid": "655F4C6269164C58AD012289638ECBC5", 00:08:35.540 "uuid": "655f4c62-6916-4c58-ad01-2289638ecbc5" 00:08:35.540 } 00:08:35.540 ] 00:08:35.540 }, 00:08:35.540 { 00:08:35.540 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:35.540 "subtype": "NVMe", 00:08:35.540 "listen_addresses": [ 
00:08:35.540 { 00:08:35.540 "trtype": "TCP", 00:08:35.540 "adrfam": "IPv4", 00:08:35.540 "traddr": "10.0.0.2", 00:08:35.540 "trsvcid": "4420" 00:08:35.540 } 00:08:35.540 ], 00:08:35.540 "allow_any_host": true, 00:08:35.540 "hosts": [], 00:08:35.540 "serial_number": "SPDK00000000000003", 00:08:35.540 "model_number": "SPDK bdev Controller", 00:08:35.540 "max_namespaces": 32, 00:08:35.540 "min_cntlid": 1, 00:08:35.540 "max_cntlid": 65519, 00:08:35.540 "namespaces": [ 00:08:35.540 { 00:08:35.540 "nsid": 1, 00:08:35.540 "bdev_name": "Null3", 00:08:35.540 "name": "Null3", 00:08:35.540 "nguid": "157CC1AF28E94A289B837B2DEB909477", 00:08:35.540 "uuid": "157cc1af-28e9-4a28-9b83-7b2deb909477" 00:08:35.540 } 00:08:35.540 ] 00:08:35.540 }, 00:08:35.540 { 00:08:35.540 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:35.540 "subtype": "NVMe", 00:08:35.540 "listen_addresses": [ 00:08:35.540 { 00:08:35.540 "trtype": "TCP", 00:08:35.540 "adrfam": "IPv4", 00:08:35.540 "traddr": "10.0.0.2", 00:08:35.540 "trsvcid": "4420" 00:08:35.540 } 00:08:35.540 ], 00:08:35.540 "allow_any_host": true, 00:08:35.540 "hosts": [], 00:08:35.540 "serial_number": "SPDK00000000000004", 00:08:35.540 "model_number": "SPDK bdev Controller", 00:08:35.540 "max_namespaces": 32, 00:08:35.540 "min_cntlid": 1, 00:08:35.540 "max_cntlid": 65519, 00:08:35.540 "namespaces": [ 00:08:35.540 { 00:08:35.540 "nsid": 1, 00:08:35.540 "bdev_name": "Null4", 00:08:35.540 "name": "Null4", 00:08:35.540 "nguid": "FCB1497F5CE6404899FC2CD76A62B170", 00:08:35.540 "uuid": "fcb1497f-5ce6-4048-99fc-2cd76a62b170" 00:08:35.540 } 00:08:35.540 ] 00:08:35.540 } 00:08:35.540 ] 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:35.540 19:24:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:35.541 
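The trace above is target/discovery.sh end to end: four null bdevs are created, each one is wrapped in its own subsystem with a TCP listener on 10.0.0.2:4420, a discovery referral to port 4430 is added, the result is checked first with nvme discover and then with the nvmf_get_subsystems RPC, and finally everything is deleted again. A minimal standalone sketch of the same flow, assuming nvmf_tgt is already running with the tcp transport created and that RPC points at SPDK's scripts/rpc.py (the rpc_cmd helper in the trace wraps that script; the path below is an assumption about the checkout location):

#!/usr/bin/env bash
# Sketch of the discovery test flow traced above (assumes a running nvmf_tgt
# with the tcp transport already created).
set -e
RPC=./scripts/rpc.py

for i in 1 2 3 4; do
  "$RPC" bdev_null_create "Null$i" 102400 512                      # size/block size as used in the trace
  "$RPC" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
  "$RPC" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
  "$RPC" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done
"$RPC" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
"$RPC" nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

# The discovery log should now contain 6 records: the current discovery
# subsystem, cnode1-cnode4, and the port-4430 referral.
nvme discover -t tcp -a 10.0.0.2 -s 4420
"$RPC" nvmf_get_subsystems

# Teardown mirrors the setup, as in the trace.
for i in 1 2 3 4; do
  "$RPC" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
  "$RPC" bdev_null_delete "Null$i"
done
"$RPC" nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430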
19:24:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:35.541 19:24:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:35.541 19:24:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:35.541 19:24:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:35.541 19:24:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:35.541 19:24:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:35.541 rmmod nvme_tcp 00:08:35.541 rmmod nvme_fabrics 00:08:35.541 rmmod nvme_keyring 00:08:35.541 19:24:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:35.541 19:24:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:35.541 19:24:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:35.541 19:24:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3410177 ']' 00:08:35.541 19:24:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3410177 00:08:35.541 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 3410177 ']' 00:08:35.541 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 3410177 00:08:35.541 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:08:35.541 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:35.541 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3410177 00:08:35.802 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:35.802 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:35.802 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3410177' 00:08:35.802 killing process with pid 3410177 00:08:35.802 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 3410177 00:08:35.802 [2024-05-15 19:24:01.750956] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:35.802 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 3410177 00:08:35.802 19:24:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:35.802 19:24:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:35.802 19:24:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:35.802 19:24:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:35.802 19:24:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:35.802 19:24:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.802 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:35.802 19:24:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.347 19:24:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:38.347 00:08:38.347 real 0m12.344s 00:08:38.347 user 
0m8.198s 00:08:38.347 sys 0m6.721s 00:08:38.347 19:24:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:38.347 19:24:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:38.347 ************************************ 00:08:38.347 END TEST nvmf_target_discovery 00:08:38.347 ************************************ 00:08:38.347 19:24:04 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:38.347 19:24:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:38.347 19:24:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:38.347 19:24:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:38.347 ************************************ 00:08:38.347 START TEST nvmf_referrals 00:08:38.347 ************************************ 00:08:38.347 19:24:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:38.347 * Looking for test storage... 00:08:38.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:38.347 19:24:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:38.347 19:24:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:38.347 19:24:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:38.347 19:24:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:38.347 19:24:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:38.347 19:24:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:38.347 19:24:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:38.347 19:24:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:38.347 19:24:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:38.347 19:24:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:38.347 19:24:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:38.347 19:24:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:38.347 19:24:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:38.347 19:24:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:38.347 19:24:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:38.347 19:24:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:38.347 19:24:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:38.347 19:24:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:38.347 19:24:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:38.347 19:24:04 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.347 19:24:04 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.347 19:24:04 
nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.347 19:24:04 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.347 19:24:04 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.348 19:24:04 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.348 19:24:04 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:38.348 19:24:04 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.348 19:24:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:38.348 19:24:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:38.348 19:24:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:38.348 19:24:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:38.348 19:24:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:38.348 19:24:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:38.348 19:24:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:38.348 19:24:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:38.348 19:24:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:38.348 19:24:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:38.348 19:24:04 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:38.348 19:24:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:38.348 19:24:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:38.348 19:24:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:38.348 19:24:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:38.348 19:24:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:38.348 19:24:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:38.348 19:24:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:38.348 19:24:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:38.348 19:24:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:38.348 19:24:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:38.348 19:24:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.348 19:24:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:38.348 19:24:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.348 19:24:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:38.348 19:24:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:38.348 19:24:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:38.348 19:24:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:46.490 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:46.490 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:46.490 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:46.490 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:46.490 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:46.490 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:46.490 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:46.490 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:46.490 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:46.490 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:46.490 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:46.491 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:46.491 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:46.491 Found net devices under 0000:31:00.0: cvl_0_0 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:46.491 Found net devices under 0000:31:00.1: cvl_0_1 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
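Here nvmftestinit is isolating the target side of the e810 pair in its own network namespace: cvl_0_0 moves into cvl_0_0_ns_spdk and gets 10.0.0.2/24, the initiator keeps cvl_0_1 with 10.0.0.1/24 in the default namespace, TCP port 4420 is opened, and connectivity is verified with ping in the checks that follow. A condensed, hand-runnable sketch of that preparation, using the interface names and addresses reported for this run (needs root):

# Network prep as performed by nvmf_tcp_init in this run.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                            # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                         # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                     # target -> initiator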
00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:46.491 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:46.752 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:46.752 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:46.752 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:46.752 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:46.752 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.532 ms 00:08:46.752 00:08:46.752 --- 10.0.0.2 ping statistics --- 00:08:46.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.752 rtt min/avg/max/mdev = 0.532/0.532/0.532/0.000 ms 00:08:46.752 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:46.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:46.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.368 ms 00:08:46.752 00:08:46.752 --- 10.0.0.1 ping statistics --- 00:08:46.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.752 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:08:46.752 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:46.752 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:46.752 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:46.753 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:46.753 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:46.753 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:46.753 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:46.753 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:46.753 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:46.753 19:24:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:46.753 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:46.753 19:24:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:46.753 19:24:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:46.753 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3415803 00:08:46.753 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3415803 00:08:46.753 19:24:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:46.753 19:24:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 3415803 ']' 00:08:46.753 19:24:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.753 19:24:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:46.753 19:24:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.753 19:24:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:46.753 19:24:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:46.753 [2024-05-15 19:24:12.860993] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:08:46.753 [2024-05-15 19:24:12.861055] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.753 EAL: No free 2048 kB hugepages reported on node 1 00:08:47.013 [2024-05-15 19:24:12.957831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:47.013 [2024-05-15 19:24:13.054846] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:47.013 [2024-05-15 19:24:13.054907] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:47.013 [2024-05-15 19:24:13.054915] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:47.013 [2024-05-15 19:24:13.054922] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:47.013 [2024-05-15 19:24:13.054928] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:47.013 [2024-05-15 19:24:13.055058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:47.013 [2024-05-15 19:24:13.055190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:47.013 [2024-05-15 19:24:13.055374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.013 [2024-05-15 19:24:13.055374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:47.585 19:24:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:47.585 19:24:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:08:47.585 19:24:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:47.585 19:24:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:47.585 19:24:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:47.845 19:24:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:47.846 19:24:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:47.846 19:24:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.846 19:24:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:47.846 [2024-05-15 19:24:13.795154] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:47.846 19:24:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.846 19:24:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:47.846 19:24:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.846 19:24:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:47.846 [2024-05-15 19:24:13.807131] nvmf_rpc.c: 
615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:47.846 [2024-05-15 19:24:13.807344] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:47.846 19:24:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.846 19:24:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:47.846 19:24:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.846 19:24:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:47.846 19:24:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.846 19:24:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:47.846 19:24:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.846 19:24:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:47.846 19:24:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.846 19:24:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:47.846 19:24:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.846 19:24:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:47.846 19:24:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.846 19:24:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:47.846 19:24:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.846 19:24:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:47.846 19:24:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:47.846 19:24:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.846 19:24:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:47.846 19:24:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:47.846 19:24:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:47.846 19:24:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:47.846 19:24:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:47.846 19:24:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.846 19:24:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:47.846 19:24:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:47.846 19:24:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.846 19:24:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:47.846 19:24:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:47.846 19:24:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:47.846 19:24:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:47.846 19:24:13 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:47.846 19:24:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:47.846 19:24:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:47.846 19:24:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:48.108 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:48.108 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:48.108 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:48.108 19:24:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.108 19:24:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:48.108 19:24:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.108 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:48.108 19:24:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.108 19:24:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:48.108 19:24:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.108 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:48.108 19:24:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.108 19:24:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:48.108 19:24:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.108 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:48.108 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:48.108 19:24:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.108 19:24:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:48.108 19:24:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.108 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:48.108 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:48.108 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:48.108 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:48.108 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:48.108 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:48.108 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:48.108 19:24:14 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 00:08:48.108 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:48.108 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:48.108 19:24:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.108 19:24:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:48.108 19:24:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.108 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:48.108 19:24:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.108 19:24:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:48.378 19:24:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.378 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:48.378 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:48.378 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:48.378 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:48.378 19:24:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.378 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:48.378 19:24:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:48.378 19:24:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.378 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:48.378 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:48.378 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:48.378 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:48.378 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:48.378 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:48.378 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:48.378 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:48.378 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:48.378 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:48.378 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:48.378 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:48.378 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:48.378 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:48.378 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:48.378 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:48.378 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:48.378 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:48.378 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:48.378 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:48.378 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:48.638 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:48.638 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:48.638 19:24:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.638 19:24:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:48.638 19:24:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.638 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:48.638 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:48.638 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:48.638 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:48.638 19:24:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.638 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:48.638 19:24:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:48.638 19:24:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.638 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:48.638 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:48.638 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:48.638 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:48.638 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:48.638 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:48.638 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 
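get_referral_ips compares what the target reports over RPC with what an initiator actually sees in the discovery log on port 8009, and the two jq filters in the trace do the extraction on either side. A small sketch of that comparison, reusing those filters; RPC is assumed to point at scripts/rpc.py, and the --hostnqn/--hostid arguments generated by nvme gen-hostnqn on the test box are omitted here:

RPC=./scripts/rpc.py
# Referral addresses as configured on the target.
rpc_ips=$("$RPC" nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort | xargs)
# Referral addresses as seen by an initiator in the discovery log
# (every record except the current discovery subsystem itself).
nvme_ips=$(nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
  | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
  | sort | xargs)
[[ "$rpc_ips" == "$nvme_ips" ]] && echo "referrals match: $rpc_ips"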
00:08:48.638 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:48.638 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:48.638 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:48.638 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:48.638 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:48.638 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:48.638 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:48.638 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:48.898 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:48.898 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:48.898 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:48.898 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:48.898 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:48.898 19:24:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:49.158 19:24:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:49.159 19:24:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:49.159 19:24:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.159 19:24:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:49.159 19:24:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.159 19:24:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:49.159 19:24:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:49.159 19:24:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.159 19:24:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:49.159 19:24:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.159 19:24:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:49.159 19:24:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:49.159 19:24:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:49.159 19:24:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:49.159 19:24:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:49.159 19:24:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:49.159 19:24:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:49.159 19:24:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:49.159 19:24:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:49.159 19:24:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:49.159 19:24:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:49.159 19:24:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:49.159 19:24:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:49.159 19:24:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:49.159 19:24:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:49.159 19:24:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:49.159 19:24:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:49.159 rmmod nvme_tcp 00:08:49.159 rmmod nvme_fabrics 00:08:49.159 rmmod nvme_keyring 00:08:49.159 19:24:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:49.419 19:24:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:49.419 19:24:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:49.419 19:24:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3415803 ']' 00:08:49.419 19:24:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3415803 00:08:49.419 19:24:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 3415803 ']' 00:08:49.419 19:24:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 3415803 00:08:49.419 19:24:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:08:49.419 19:24:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:49.419 19:24:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3415803 00:08:49.419 19:24:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:49.420 19:24:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:49.420 19:24:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3415803' 00:08:49.420 killing process with pid 3415803 00:08:49.420 19:24:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 3415803 00:08:49.420 [2024-05-15 19:24:15.400918] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:49.420 19:24:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 3415803 00:08:49.420 19:24:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:49.420 19:24:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:49.420 19:24:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:49.420 19:24:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:49.420 19:24:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 
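Taken together, the referrals test above covers the whole lifecycle: three plain referrals (127.0.0.2 through 127.0.0.4 on port 4430) are added, confirmed via RPC and via the discovery log, and removed; then subsystem-qualified referrals (-n nqn.2016-06.io.spdk:cnode1 and -n discovery) are exercised the same way before teardown. Condensed into a sketch with the addresses and ports from the log, again assuming scripts/rpc.py against the running target:

RPC=./scripts/rpc.py
# Plain discovery referrals.
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
  "$RPC" nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done
"$RPC" nvmf_discovery_get_referrals | jq length            # expect 3
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
  "$RPC" nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
done
# Referrals can also point at a specific subsystem NQN, or explicitly at the
# discovery subsystem, via -n, exactly as in the trace.
"$RPC" nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
"$RPC" nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
"$RPC" nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
"$RPC" nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery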
00:08:49.420 19:24:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.420 19:24:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:49.420 19:24:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.966 19:24:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:51.966 00:08:51.966 real 0m13.559s 00:08:51.966 user 0m13.230s 00:08:51.966 sys 0m7.232s 00:08:51.966 19:24:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:51.966 19:24:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:51.966 ************************************ 00:08:51.966 END TEST nvmf_referrals 00:08:51.966 ************************************ 00:08:51.966 19:24:17 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:51.966 19:24:17 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:51.966 19:24:17 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:51.966 19:24:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:51.966 ************************************ 00:08:51.966 START TEST nvmf_connect_disconnect 00:08:51.966 ************************************ 00:08:51.966 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:51.966 * Looking for test storage... 00:08:51.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:51.966 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:51.967 19:24:17 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
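The trace above is nvmf/common.sh being sourced at the top of connect_disconnect.sh: it pins the listener ports, generates a host NQN with nvme-cli, and derives the host ID from it. A condensed sketch of that environment, following the variable names visible in the trace (the HOSTID derivation is an assumption consistent with the traced values, not copied from the script):

    # Condensed sketch of the per-test environment nvmf/common.sh establishes.
    NVMF_PORT=4420                       # primary NVMe/TCP listener port
    NVMF_SECOND_PORT=4421
    NVMF_THIRD_PORT=4422
    NVMF_SERIAL=SPDKISFASTANDAWESOME     # serial number given to the test subsystem
    NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # assumption: the uuid suffix of the generated NQN
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn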
00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:51.967 19:24:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:00.128 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:00.128 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:09:00.128 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:00.128 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:00.128 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:00.128 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:00.128 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:00.128 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:09:00.128 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:00.128 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:09:00.128 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:09:00.128 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:09:00.128 
19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:09:00.128 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:09:00.128 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:09:00.128 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:00.128 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:00.128 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:00.128 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:00.128 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:00.128 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:00.128 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:00.128 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:00.128 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:00.128 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:00.128 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:00.128 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:00.128 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:00.128 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:00.128 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:00.128 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:00.128 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:00.128 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:00.128 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:00.128 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:00.128 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:00.128 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:00.128 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.128 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.128 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:00.128 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:00.128 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:00.128 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:00.128 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:00.128 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:09:00.128 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.128 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.128 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:00.128 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:00.129 Found net devices under 0000:31:00.0: cvl_0_0 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:00.129 Found net devices under 0000:31:00.1: cvl_0_1 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:00.129 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:00.129 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:09:00.129 00:09:00.129 --- 10.0.0.2 ping statistics --- 00:09:00.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.129 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:00.129 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:00.129 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.345 ms 00:09:00.129 00:09:00.129 --- 10.0.0.1 ping statistics --- 00:09:00.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.129 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3420951 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3420951 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 3420951 ']' 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:00.129 19:24:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:00.129 [2024-05-15 19:24:25.966065] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
00:09:00.129 [2024-05-15 19:24:25.966116] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:00.129 EAL: No free 2048 kB hugepages reported on node 1 00:09:00.129 [2024-05-15 19:24:26.055798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:00.129 [2024-05-15 19:24:26.149714] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:00.129 [2024-05-15 19:24:26.149775] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:00.129 [2024-05-15 19:24:26.149784] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:00.129 [2024-05-15 19:24:26.149791] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:00.129 [2024-05-15 19:24:26.149797] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:00.129 [2024-05-15 19:24:26.149929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:00.129 [2024-05-15 19:24:26.150071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:00.129 [2024-05-15 19:24:26.150239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.129 [2024-05-15 19:24:26.150239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:00.700 19:24:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:00.700 19:24:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:09:00.700 19:24:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:00.700 19:24:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:00.700 19:24:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:00.700 19:24:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:00.700 19:24:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:00.700 19:24:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.700 19:24:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:00.961 [2024-05-15 19:24:26.886146] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:00.961 19:24:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.961 19:24:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:00.961 19:24:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.961 19:24:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:00.961 19:24:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.961 19:24:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:00.961 19:24:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:00.961 19:24:26 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.961 19:24:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:00.961 19:24:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.961 19:24:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:00.961 19:24:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.961 19:24:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:00.961 19:24:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.961 19:24:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:00.962 19:24:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.962 19:24:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:00.962 [2024-05-15 19:24:26.945308] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:00.962 [2024-05-15 19:24:26.945533] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:00.962 19:24:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.962 19:24:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:09:00.962 19:24:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:09:00.962 19:24:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:09:05.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.464 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.765 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.043 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.344 19:24:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:09:19.344 19:24:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:09:19.344 19:24:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:19.344 19:24:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:09:19.344 19:24:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:19.344 19:24:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:09:19.344 19:24:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:19.344 19:24:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:19.344 rmmod nvme_tcp 00:09:19.344 rmmod nvme_fabrics 00:09:19.344 rmmod nvme_keyring 00:09:19.344 19:24:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:19.344 19:24:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:09:19.344 19:24:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:09:19.344 19:24:45 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3420951 ']' 00:09:19.344 19:24:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3420951 00:09:19.344 19:24:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 3420951 ']' 00:09:19.344 19:24:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 3420951 00:09:19.344 19:24:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:09:19.344 19:24:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:19.344 19:24:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3420951 00:09:19.344 19:24:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:19.344 19:24:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:19.344 19:24:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3420951' 00:09:19.344 killing process with pid 3420951 00:09:19.344 19:24:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 3420951 00:09:19.344 [2024-05-15 19:24:45.422675] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:19.344 19:24:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 3420951 00:09:19.604 19:24:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:19.604 19:24:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:19.604 19:24:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:19.604 19:24:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:19.604 19:24:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:19.604 19:24:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.604 19:24:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:19.604 19:24:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.518 19:24:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:21.518 00:09:21.518 real 0m29.944s 00:09:21.518 user 1m19.542s 00:09:21.518 sys 0m7.414s 00:09:21.518 19:24:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:21.518 19:24:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:21.518 ************************************ 00:09:21.518 END TEST nvmf_connect_disconnect 00:09:21.518 ************************************ 00:09:21.518 19:24:47 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:21.518 19:24:47 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:21.518 19:24:47 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:21.518 19:24:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:21.779 ************************************ 00:09:21.779 START TEST nvmf_multitarget 
00:09:21.779 ************************************ 00:09:21.779 19:24:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:21.779 * Looking for test storage... 00:09:21.779 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:21.779 19:24:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:21.779 19:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:09:21.779 19:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:21.779 19:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:21.779 19:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:21.779 19:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:21.780 19:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:21.780 19:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:21.780 19:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:21.780 19:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:21.780 19:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:21.780 19:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:21.780 19:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:21.780 19:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:21.780 19:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:21.780 19:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:21.780 19:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:21.780 19:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:21.780 19:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:21.780 19:24:47 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:21.780 19:24:47 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:21.780 19:24:47 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:21.780 19:24:47 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.780 19:24:47 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.780 19:24:47 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.780 19:24:47 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:09:21.780 19:24:47 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.780 19:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:09:21.780 19:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:21.780 19:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:21.780 19:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:21.780 19:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:21.780 19:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:21.780 19:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:21.780 19:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:21.780 19:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:21.780 19:24:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:21.780 19:24:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:09:21.780 19:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:21.780 19:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:21.780 19:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:21.780 19:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:21.780 19:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:21.780 19:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
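Each suite in this log (nvmf_referrals, nvmf_connect_disconnect, and now nvmf_multitarget) is launched through the harness's run_test helper, which prints the START TEST / END TEST banners and the real/user/sys timings that appear earlier in the log. A rough stand-in for that wrapper, written only to make the log structure easier to follow; the real helper lives in autotest_common.sh and also handles xtrace control and argument checks:

    # Rough stand-in for the run_test wrapper whose banners and timings appear in this log.
    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                      # produces the real/user/sys lines seen above
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    # Usage mirroring the log:
    # run_test_sketch nvmf_multitarget ./test/nvmf/target/multitarget.sh --transport=tcp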
00:09:21.780 19:24:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:21.780 19:24:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.780 19:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:21.780 19:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:21.780 19:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:09:21.780 19:24:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:29.923 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:29.923 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:09:29.923 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:29.923 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:29.923 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:29.923 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:29.923 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:29.923 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:09:29.923 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:29.923 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:09:29.923 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:09:29.923 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:09:29.923 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:09:29.923 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:09:29.923 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:09:29.923 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:29.923 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:29.923 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:29.923 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:29.923 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:29.923 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:29.923 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:29.923 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:29.923 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:29.923 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:29.923 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:29.923 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:29.923 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:29.923 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:09:29.923 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:29.923 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:29.923 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:29.923 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:29.923 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:29.923 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:29.923 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:29.923 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:29.923 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:29.923 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:29.923 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:29.923 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:29.923 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:29.923 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:29.923 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:29.923 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:29.923 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:29.924 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:29.924 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:29.924 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:29.924 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:29.924 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:29.924 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:29.924 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.924 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:29.924 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:29.924 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:29.924 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:29.924 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.924 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:29.924 Found net devices under 0000:31:00.0: cvl_0_0 00:09:29.924 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.924 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:29.924 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.924 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:29.924 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
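The trace above is gather_supported_nvmf_pci_devs walking the Intel E810 functions (vendor 0x8086, device 0x159b) and resolving each PCI address to its kernel net interface through the /sys/bus/pci/devices/<pci>/net/ directory, which is where the cvl_0_0 and cvl_0_1 names come from. A standalone sketch of that lookup, assuming sysfs is mounted in the usual place:

    # Sketch: find net interfaces backed by E810 functions (0x8086:0x159b), as in the trace.
    for pci in /sys/bus/pci/devices/*; do
        [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
        for netdir in "$pci"/net/*; do
            [[ -e $netdir ]] || continue
            echo "Found net devices under ${pci##*/}: ${netdir##*/}"   # e.g. cvl_0_0
        done
    done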
00:09:29.924 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:29.924 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:29.924 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.924 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:29.924 Found net devices under 0000:31:00.1: cvl_0_1 00:09:29.924 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.924 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:29.924 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:09:29.924 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:29.924 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:29.924 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:29.924 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:29.924 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:29.924 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:29.924 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:29.924 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:29.924 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:29.924 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:29.924 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:29.924 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:29.924 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:29.924 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:29.924 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:29.924 19:24:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:29.924 19:24:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:29.924 19:24:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:29.924 19:24:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:29.924 19:24:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:30.185 19:24:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:30.185 19:24:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:30.185 19:24:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:30.185 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:30.185 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.614 ms 00:09:30.185 00:09:30.185 --- 10.0.0.2 ping statistics --- 00:09:30.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.185 rtt min/avg/max/mdev = 0.614/0.614/0.614/0.000 ms 00:09:30.186 19:24:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:30.186 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:30.186 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.369 ms 00:09:30.186 00:09:30.186 --- 10.0.0.1 ping statistics --- 00:09:30.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.186 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:09:30.186 19:24:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:30.186 19:24:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:09:30.186 19:24:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:30.186 19:24:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:30.186 19:24:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:30.186 19:24:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:30.186 19:24:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:30.186 19:24:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:30.186 19:24:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:30.186 19:24:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:09:30.186 19:24:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:30.186 19:24:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:30.186 19:24:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:30.186 19:24:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3429729 00:09:30.186 19:24:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 3429729 00:09:30.186 19:24:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:30.186 19:24:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 3429729 ']' 00:09:30.186 19:24:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.186 19:24:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:30.186 19:24:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.186 19:24:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:30.186 19:24:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:30.186 [2024-05-15 19:24:56.312617] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
00:09:30.186 [2024-05-15 19:24:56.312678] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:30.186 EAL: No free 2048 kB hugepages reported on node 1 00:09:30.447 [2024-05-15 19:24:56.410210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:30.447 [2024-05-15 19:24:56.507761] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:30.447 [2024-05-15 19:24:56.507827] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:30.447 [2024-05-15 19:24:56.507842] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:30.447 [2024-05-15 19:24:56.507849] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:30.447 [2024-05-15 19:24:56.507855] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:30.447 [2024-05-15 19:24:56.508015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.447 [2024-05-15 19:24:56.508160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:30.447 [2024-05-15 19:24:56.508354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:30.447 [2024-05-15 19:24:56.508355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.017 19:24:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:31.017 19:24:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:09:31.017 19:24:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:31.017 19:24:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:31.017 19:24:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:31.278 19:24:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:31.278 19:24:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:31.278 19:24:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:31.278 19:24:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:09:31.278 19:24:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:09:31.278 19:24:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:09:31.278 "nvmf_tgt_1" 00:09:31.539 19:24:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:09:31.539 "nvmf_tgt_2" 00:09:31.539 19:24:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:31.539 19:24:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:09:31.539 19:24:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:09:31.539 
19:24:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:09:31.800 true 00:09:31.800 19:24:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:09:31.800 true 00:09:31.800 19:24:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:31.800 19:24:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:09:32.060 19:24:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:09:32.060 19:24:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:32.060 19:24:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:09:32.060 19:24:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:32.060 19:24:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:09:32.060 19:24:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:32.060 19:24:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:09:32.060 19:24:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:32.060 19:24:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:32.060 rmmod nvme_tcp 00:09:32.060 rmmod nvme_fabrics 00:09:32.060 rmmod nvme_keyring 00:09:32.060 19:24:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:32.060 19:24:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:09:32.060 19:24:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:09:32.060 19:24:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3429729 ']' 00:09:32.060 19:24:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3429729 00:09:32.060 19:24:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 3429729 ']' 00:09:32.060 19:24:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 3429729 00:09:32.060 19:24:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:09:32.060 19:24:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:32.060 19:24:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3429729 00:09:32.060 19:24:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:32.060 19:24:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:32.060 19:24:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3429729' 00:09:32.060 killing process with pid 3429729 00:09:32.060 19:24:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 3429729 00:09:32.060 19:24:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 3429729 00:09:32.321 19:24:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:32.321 19:24:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:32.321 19:24:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:32.321 19:24:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:32.321 19:24:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:32.321 19:24:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:32.321 19:24:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:32.321 19:24:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.234 19:25:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:34.234 00:09:34.234 real 0m12.660s 00:09:34.234 user 0m10.408s 00:09:34.234 sys 0m6.772s 00:09:34.234 19:25:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:34.234 19:25:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:34.234 ************************************ 00:09:34.234 END TEST nvmf_multitarget 00:09:34.234 ************************************ 00:09:34.496 19:25:00 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:34.496 19:25:00 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:34.496 19:25:00 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:34.496 19:25:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:34.496 ************************************ 00:09:34.496 START TEST nvmf_rpc 00:09:34.496 ************************************ 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:34.496 * Looking for test storage... 00:09:34.496 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:34.496 19:25:00 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:34.496 
19:25:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:09:34.496 19:25:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.635 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:42.635 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:09:42.635 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:42.635 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:42.635 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:42.635 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:42.635 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:42.635 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:09:42.635 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:42.635 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:09:42.635 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:09:42.635 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:09:42.635 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:09:42.635 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:09:42.635 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:09:42.635 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:42.635 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:42.635 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:42.635 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:42.635 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:42.635 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:42.635 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:42.635 19:25:08 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:42.635 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:42.635 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:42.635 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:42.635 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:42.635 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:42.635 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:42.635 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:42.635 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:42.635 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:42.635 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:42.635 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:42.635 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:42.635 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:42.636 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:42.636 Found net devices under 0000:31:00.0: cvl_0_0 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.636 
19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:42.636 Found net devices under 0000:31:00.1: cvl_0_1 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:42.636 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:42.897 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:42.897 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:42.897 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:42.897 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:42.897 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:42.897 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:09:42.897 00:09:42.897 --- 10.0.0.2 ping statistics --- 00:09:42.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.897 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:09:42.897 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:42.897 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:42.897 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:09:42.897 00:09:42.897 --- 10.0.0.1 ping statistics --- 00:09:42.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.897 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:09:42.897 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:42.897 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:09:42.897 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:42.897 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:42.897 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:42.897 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:42.897 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:42.897 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:42.897 19:25:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:42.897 19:25:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:09:42.897 19:25:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:42.897 19:25:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:42.897 19:25:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.897 19:25:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3434784 00:09:42.897 19:25:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3434784 00:09:42.897 19:25:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:42.897 19:25:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 3434784 ']' 00:09:42.897 19:25:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.897 19:25:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:42.897 19:25:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.897 19:25:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:42.897 19:25:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.157 [2024-05-15 19:25:09.083931] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
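The network plumbing that nvmf/common.sh traced just above is the same for every test in this run: one port of the e810 pair (cvl_0_0) is moved into a private namespace to act as the target, the other port (cvl_0_1) stays in the root namespace as the initiator, and TCP port 4420 is opened between them. Condensed, using the interface names and addresses from this host:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP toward the target

The two pings above (host to 10.0.0.2, namespace back to 10.0.0.1) are simply the sanity check that this plumbing is in place before nvmf_tgt is started inside the namespace.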
00:09:43.157 [2024-05-15 19:25:09.083981] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:43.157 EAL: No free 2048 kB hugepages reported on node 1 00:09:43.157 [2024-05-15 19:25:09.176543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:43.157 [2024-05-15 19:25:09.265515] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:43.157 [2024-05-15 19:25:09.265582] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:43.157 [2024-05-15 19:25:09.265596] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:43.157 [2024-05-15 19:25:09.265604] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:43.157 [2024-05-15 19:25:09.265610] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:43.157 [2024-05-15 19:25:09.265752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.157 [2024-05-15 19:25:09.265900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:43.157 [2024-05-15 19:25:09.266067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.157 [2024-05-15 19:25:09.266068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:44.099 19:25:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:44.099 19:25:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:09:44.099 19:25:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:44.099 19:25:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:44.099 19:25:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.099 19:25:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:44.099 19:25:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:09:44.099 19:25:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.099 19:25:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.099 19:25:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.099 19:25:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:09:44.099 "tick_rate": 2400000000, 00:09:44.099 "poll_groups": [ 00:09:44.099 { 00:09:44.099 "name": "nvmf_tgt_poll_group_000", 00:09:44.099 "admin_qpairs": 0, 00:09:44.099 "io_qpairs": 0, 00:09:44.099 "current_admin_qpairs": 0, 00:09:44.099 "current_io_qpairs": 0, 00:09:44.099 "pending_bdev_io": 0, 00:09:44.099 "completed_nvme_io": 0, 00:09:44.099 "transports": [] 00:09:44.099 }, 00:09:44.099 { 00:09:44.099 "name": "nvmf_tgt_poll_group_001", 00:09:44.099 "admin_qpairs": 0, 00:09:44.099 "io_qpairs": 0, 00:09:44.099 "current_admin_qpairs": 0, 00:09:44.099 "current_io_qpairs": 0, 00:09:44.099 "pending_bdev_io": 0, 00:09:44.099 "completed_nvme_io": 0, 00:09:44.099 "transports": [] 00:09:44.099 }, 00:09:44.099 { 00:09:44.099 "name": "nvmf_tgt_poll_group_002", 00:09:44.099 "admin_qpairs": 0, 00:09:44.099 "io_qpairs": 0, 00:09:44.099 "current_admin_qpairs": 0, 00:09:44.099 "current_io_qpairs": 0, 00:09:44.099 "pending_bdev_io": 0, 00:09:44.099 "completed_nvme_io": 0, 00:09:44.099 "transports": [] 
00:09:44.099 }, 00:09:44.099 { 00:09:44.099 "name": "nvmf_tgt_poll_group_003", 00:09:44.099 "admin_qpairs": 0, 00:09:44.099 "io_qpairs": 0, 00:09:44.099 "current_admin_qpairs": 0, 00:09:44.099 "current_io_qpairs": 0, 00:09:44.099 "pending_bdev_io": 0, 00:09:44.099 "completed_nvme_io": 0, 00:09:44.099 "transports": [] 00:09:44.099 } 00:09:44.099 ] 00:09:44.099 }' 00:09:44.099 19:25:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:09:44.099 19:25:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:09:44.099 19:25:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:09:44.099 19:25:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:09:44.099 19:25:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:09:44.099 19:25:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:09:44.099 19:25:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:09:44.099 19:25:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:44.099 19:25:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.099 19:25:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.099 [2024-05-15 19:25:10.122463] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:44.099 19:25:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.099 19:25:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:09:44.099 19:25:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.099 19:25:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.099 19:25:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.099 19:25:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:09:44.099 "tick_rate": 2400000000, 00:09:44.099 "poll_groups": [ 00:09:44.099 { 00:09:44.099 "name": "nvmf_tgt_poll_group_000", 00:09:44.099 "admin_qpairs": 0, 00:09:44.099 "io_qpairs": 0, 00:09:44.099 "current_admin_qpairs": 0, 00:09:44.099 "current_io_qpairs": 0, 00:09:44.099 "pending_bdev_io": 0, 00:09:44.099 "completed_nvme_io": 0, 00:09:44.099 "transports": [ 00:09:44.099 { 00:09:44.099 "trtype": "TCP" 00:09:44.099 } 00:09:44.099 ] 00:09:44.099 }, 00:09:44.099 { 00:09:44.099 "name": "nvmf_tgt_poll_group_001", 00:09:44.099 "admin_qpairs": 0, 00:09:44.099 "io_qpairs": 0, 00:09:44.099 "current_admin_qpairs": 0, 00:09:44.100 "current_io_qpairs": 0, 00:09:44.100 "pending_bdev_io": 0, 00:09:44.100 "completed_nvme_io": 0, 00:09:44.100 "transports": [ 00:09:44.100 { 00:09:44.100 "trtype": "TCP" 00:09:44.100 } 00:09:44.100 ] 00:09:44.100 }, 00:09:44.100 { 00:09:44.100 "name": "nvmf_tgt_poll_group_002", 00:09:44.100 "admin_qpairs": 0, 00:09:44.100 "io_qpairs": 0, 00:09:44.100 "current_admin_qpairs": 0, 00:09:44.100 "current_io_qpairs": 0, 00:09:44.100 "pending_bdev_io": 0, 00:09:44.100 "completed_nvme_io": 0, 00:09:44.100 "transports": [ 00:09:44.100 { 00:09:44.100 "trtype": "TCP" 00:09:44.100 } 00:09:44.100 ] 00:09:44.100 }, 00:09:44.100 { 00:09:44.100 "name": "nvmf_tgt_poll_group_003", 00:09:44.100 "admin_qpairs": 0, 00:09:44.100 "io_qpairs": 0, 00:09:44.100 "current_admin_qpairs": 0, 00:09:44.100 "current_io_qpairs": 0, 00:09:44.100 "pending_bdev_io": 0, 00:09:44.100 "completed_nvme_io": 0, 00:09:44.100 "transports": [ 00:09:44.100 { 00:09:44.100 "trtype": "TCP" 00:09:44.100 } 00:09:44.100 ] 00:09:44.100 } 00:09:44.100 ] 
00:09:44.100 }' 00:09:44.100 19:25:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:09:44.100 19:25:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:44.100 19:25:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:44.100 19:25:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:44.100 19:25:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:09:44.100 19:25:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:09:44.100 19:25:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:44.100 19:25:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:44.100 19:25:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:44.100 19:25:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:09:44.100 19:25:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:09:44.100 19:25:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:09:44.100 19:25:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:09:44.100 19:25:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:44.100 19:25:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.100 19:25:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.100 Malloc1 00:09:44.100 19:25:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.100 19:25:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:44.100 19:25:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.100 19:25:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.100 19:25:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.100 19:25:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:44.100 19:25:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.100 19:25:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.360 19:25:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.360 19:25:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:09:44.360 19:25:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.360 19:25:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.360 19:25:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.360 19:25:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:44.360 19:25:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.360 19:25:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.360 [2024-05-15 19:25:10.313889] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:44.360 [2024-05-15 19:25:10.314146] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:44.360 19:25:10 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.360 19:25:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:09:44.360 19:25:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:44.360 19:25:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:09:44.360 19:25:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:09:44.360 19:25:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:44.360 19:25:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:44.360 19:25:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:44.360 19:25:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:44.360 19:25:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:44.360 19:25:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:44.360 19:25:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:44.360 19:25:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:09:44.360 [2024-05-15 19:25:10.340688] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:09:44.360 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:44.360 could not add new controller: failed to write to nvme-fabrics device 00:09:44.360 19:25:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:44.360 19:25:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:44.360 19:25:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:44.360 19:25:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:44.360 19:25:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:44.360 19:25:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.360 19:25:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.360 19:25:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.360 19:25:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:45.744 19:25:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 
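The failed connect and retry traced above are the point of this part of rpc.sh: with allow_any_host disabled, a host whose NQN has not been added to the subsystem is rejected by the target ("does not allow host"), and only after nvmf_subsystem_add_host does the same connect succeed. The rpc_cmd calls in the trace resolve to SPDK's scripts/rpc.py against the /var/tmp/spdk.sock socket that nvmf_tgt announced earlier; written out directly as a sketch (the explicit rpc.py invocation and the freshly generated host NQN are assumptions, the RPC names and arguments are the ones traced):

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
    NQN=nqn.2016-06.io.spdk:cnode1
    HOSTNQN=$(nvme gen-hostnqn)                       # the test uses a fixed NVME_HOSTNQN generated the same way
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc1
    $RPC nvmf_create_subsystem $NQN -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns $NQN Malloc1
    $RPC nvmf_subsystem_allow_any_host -d $NQN        # require an explicit host list
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
    # rejected: this host NQN is not on the subsystem's list
    nvme connect -t tcp -n $NQN -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN" || true
    $RPC nvmf_subsystem_add_host $NQN "$HOSTNQN"
    # accepted this time; the test then waits for the SPDKISFASTANDAWESOME serial to show up in lsblk
    nvme connect -t tcp -n $NQN -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN"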
00:09:45.744 19:25:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:09:45.744 19:25:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:45.745 19:25:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:45.745 19:25:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:09:48.285 19:25:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:48.285 19:25:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:48.285 19:25:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:48.285 19:25:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:48.285 19:25:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:48.285 19:25:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:09:48.285 19:25:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:48.285 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.285 19:25:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:48.285 19:25:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:09:48.285 19:25:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:48.285 19:25:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:48.285 19:25:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:48.285 19:25:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:48.285 19:25:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:09:48.285 19:25:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:48.285 19:25:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.285 19:25:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:48.285 19:25:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.285 19:25:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:48.285 19:25:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:48.285 19:25:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:48.285 19:25:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:09:48.285 19:25:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:48.285 19:25:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:48.285 19:25:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:48.285 19:25:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:48.285 19:25:14 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:48.285 19:25:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:48.285 19:25:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:48.285 19:25:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:48.285 [2024-05-15 19:25:14.054391] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:09:48.285 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:48.285 could not add new controller: failed to write to nvme-fabrics device 00:09:48.285 19:25:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:48.285 19:25:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:48.285 19:25:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:48.285 19:25:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:48.285 19:25:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:09:48.285 19:25:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.285 19:25:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:48.285 19:25:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.285 19:25:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:49.668 19:25:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:49.668 19:25:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:09:49.668 19:25:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:49.668 19:25:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:49.668 19:25:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:09:51.580 19:25:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:51.580 19:25:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:51.580 19:25:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:51.580 19:25:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:51.580 19:25:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:51.580 19:25:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:09:51.580 19:25:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:51.841 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.841 19:25:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:51.841 19:25:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:09:51.841 19:25:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:51.841 19:25:17 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:51.841 19:25:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:51.841 19:25:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:51.841 19:25:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:09:51.841 19:25:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:51.841 19:25:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.841 19:25:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:51.841 19:25:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.841 19:25:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:09:51.841 19:25:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:51.841 19:25:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:51.841 19:25:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.842 19:25:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:51.842 19:25:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.842 19:25:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:51.842 19:25:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.842 19:25:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:51.842 [2024-05-15 19:25:17.897912] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:51.842 19:25:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.842 19:25:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:51.842 19:25:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.842 19:25:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:51.842 19:25:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.842 19:25:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:51.842 19:25:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.842 19:25:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:51.842 19:25:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.842 19:25:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:53.225 19:25:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:53.225 19:25:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:09:53.225 19:25:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:53.225 19:25:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:53.225 19:25:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:09:55.838 19:25:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:55.838 
19:25:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:55.838 19:25:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:55.838 19:25:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:55.838 19:25:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:55.838 19:25:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:09:55.838 19:25:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:55.838 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.838 19:25:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:55.838 19:25:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:09:55.838 19:25:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:55.838 19:25:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:55.838 19:25:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:55.838 19:25:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:55.838 19:25:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:09:55.838 19:25:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:55.838 19:25:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:55.838 19:25:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:55.838 19:25:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:55.838 19:25:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:55.838 19:25:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:55.838 19:25:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:55.838 19:25:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:55.838 19:25:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:55.838 19:25:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:55.838 19:25:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:55.838 19:25:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:55.838 19:25:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:55.838 19:25:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:55.838 19:25:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:55.838 19:25:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:55.838 [2024-05-15 19:25:21.664727] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:55.838 19:25:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:55.838 19:25:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:55.838 19:25:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:55.838 19:25:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set 
+x 00:09:55.839 19:25:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:55.839 19:25:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:55.839 19:25:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:55.839 19:25:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:55.839 19:25:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:55.839 19:25:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:57.221 19:25:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:57.221 19:25:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:09:57.221 19:25:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:57.221 19:25:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:57.221 19:25:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:09:59.132 19:25:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:59.132 19:25:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:59.132 19:25:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:59.132 19:25:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:59.132 19:25:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:59.132 19:25:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:09:59.132 19:25:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:59.392 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.392 19:25:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:59.392 19:25:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:09:59.392 19:25:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:59.392 19:25:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:59.392 19:25:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:59.392 19:25:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:59.392 19:25:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:09:59.392 19:25:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:59.392 19:25:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.392 19:25:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:59.392 19:25:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.392 19:25:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:59.392 19:25:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.392 19:25:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:59.392 19:25:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.392 19:25:25 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:59.392 19:25:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:59.392 19:25:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.392 19:25:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:59.392 19:25:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.392 19:25:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:59.392 19:25:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.392 19:25:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:59.392 [2024-05-15 19:25:25.431679] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:59.392 19:25:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.392 19:25:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:59.392 19:25:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.392 19:25:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:59.392 19:25:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.392 19:25:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:59.392 19:25:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.392 19:25:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:59.392 19:25:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.392 19:25:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:00.774 19:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:00.774 19:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:10:00.774 19:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:10:00.774 19:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:10:00.774 19:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:10:03.321 19:25:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:10:03.321 19:25:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:10:03.321 19:25:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:10:03.321 19:25:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:10:03.321 19:25:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:10:03.321 19:25:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:10:03.321 19:25:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:03.321 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.321 19:25:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:03.321 19:25:29 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1215 -- # local i=0 00:10:03.321 19:25:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:10:03.321 19:25:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:03.321 19:25:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:10:03.321 19:25:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:03.321 19:25:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:10:03.321 19:25:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:03.321 19:25:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.321 19:25:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:03.321 19:25:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.321 19:25:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:03.321 19:25:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.321 19:25:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:03.321 19:25:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.321 19:25:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:03.321 19:25:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:03.321 19:25:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.321 19:25:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:03.321 19:25:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.321 19:25:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:03.321 19:25:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.321 19:25:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:03.321 [2024-05-15 19:25:29.255463] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:03.321 19:25:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.321 19:25:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:03.321 19:25:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.321 19:25:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:03.321 19:25:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.321 19:25:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:03.321 19:25:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.321 19:25:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:03.321 19:25:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.321 19:25:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:04.705 19:25:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial 
SPDKISFASTANDAWESOME 00:10:04.705 19:25:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:10:04.705 19:25:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:10:04.705 19:25:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:10:04.705 19:25:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:10:06.616 19:25:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:10:06.616 19:25:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:10:06.616 19:25:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:10:06.616 19:25:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:10:06.616 19:25:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:10:06.616 19:25:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:10:06.616 19:25:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:06.876 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.876 19:25:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:06.876 19:25:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:10:06.876 19:25:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:10:06.876 19:25:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:06.876 19:25:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:10:06.876 19:25:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:06.876 19:25:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:10:06.876 19:25:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:06.876 19:25:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.876 19:25:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.876 19:25:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.876 19:25:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:06.876 19:25:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.876 19:25:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.876 19:25:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.876 19:25:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:06.876 19:25:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:06.876 19:25:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.876 19:25:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.876 19:25:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.876 19:25:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:06.876 19:25:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.876 19:25:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.876 
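The waitforserial / waitforserial_disconnect helpers interleaved with each connect above poll lsblk until a block device carrying the subsystem serial shows up (or goes away). A rough stand-alone equivalent of the appearance check, assuming the same 2-second poll and 16-attempt cap seen in the trace:

  # poll until a namespace with the expected serial is visible (rough sketch of waitforserial)
  serial=SPDKISFASTANDAWESOME
  expected=1
  i=0
  while (( i++ <= 15 )); do
      sleep 2
      found=$(lsblk -l -o NAME,SERIAL | grep -c "$serial" || true)
      (( found == expected )) && break
  done
  (( found == expected ))   # non-zero exit here means the device never appeared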
[2024-05-15 19:25:32.972922] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:06.876 19:25:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.876 19:25:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:06.876 19:25:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.876 19:25:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.876 19:25:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.876 19:25:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:06.876 19:25:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.877 19:25:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.877 19:25:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.877 19:25:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:08.787 19:25:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:08.787 19:25:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:10:08.787 19:25:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:10:08.787 19:25:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:10:08.787 19:25:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:10.697 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.697 [2024-05-15 19:25:36.690752] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 
-- # xtrace_disable 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.697 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.697 [2024-05-15 19:25:36.754889] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.698 [2024-05-15 19:25:36.811060] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:10.698 
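From target/rpc.sh@99 onward the test switches to pure namespace churn: the seq 1 5 loop builds up and immediately tears down the subsystem without any host connect in between. Condensed, each pass is (sketch, using the same rpc.py calls the trace shows):

  # namespace add/remove churn, five passes (sketch)
  for i in $(seq 1 5); do
      scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
      scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
      scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
      scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
      scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done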
19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.698 [2024-05-15 19:25:36.867261] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.698 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.957 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.957 19:25:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:10.957 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.957 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.958 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.958 19:25:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.958 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.958 19:25:36 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.958 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.958 19:25:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:10.958 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.958 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.958 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.958 19:25:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:10.958 19:25:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:10.958 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.958 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.958 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.958 19:25:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:10.958 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.958 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.958 [2024-05-15 19:25:36.931477] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:10.958 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.958 19:25:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:10.958 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.958 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.958 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.958 19:25:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:10.958 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.958 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.958 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.958 19:25:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.958 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.958 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.958 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.958 19:25:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:10.958 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.958 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.958 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.958 19:25:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:10:10.958 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.958 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.958 19:25:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:10:10.958 19:25:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:10:10.958 "tick_rate": 2400000000, 00:10:10.958 "poll_groups": [ 00:10:10.958 { 00:10:10.958 "name": "nvmf_tgt_poll_group_000", 00:10:10.958 "admin_qpairs": 0, 00:10:10.958 "io_qpairs": 224, 00:10:10.958 "current_admin_qpairs": 0, 00:10:10.958 "current_io_qpairs": 0, 00:10:10.958 "pending_bdev_io": 0, 00:10:10.958 "completed_nvme_io": 274, 00:10:10.958 "transports": [ 00:10:10.958 { 00:10:10.958 "trtype": "TCP" 00:10:10.958 } 00:10:10.958 ] 00:10:10.958 }, 00:10:10.958 { 00:10:10.958 "name": "nvmf_tgt_poll_group_001", 00:10:10.958 "admin_qpairs": 1, 00:10:10.958 "io_qpairs": 223, 00:10:10.958 "current_admin_qpairs": 0, 00:10:10.958 "current_io_qpairs": 0, 00:10:10.958 "pending_bdev_io": 0, 00:10:10.958 "completed_nvme_io": 473, 00:10:10.958 "transports": [ 00:10:10.958 { 00:10:10.958 "trtype": "TCP" 00:10:10.958 } 00:10:10.958 ] 00:10:10.958 }, 00:10:10.958 { 00:10:10.958 "name": "nvmf_tgt_poll_group_002", 00:10:10.958 "admin_qpairs": 6, 00:10:10.958 "io_qpairs": 218, 00:10:10.958 "current_admin_qpairs": 0, 00:10:10.958 "current_io_qpairs": 0, 00:10:10.958 "pending_bdev_io": 0, 00:10:10.958 "completed_nvme_io": 267, 00:10:10.958 "transports": [ 00:10:10.958 { 00:10:10.958 "trtype": "TCP" 00:10:10.958 } 00:10:10.958 ] 00:10:10.958 }, 00:10:10.958 { 00:10:10.958 "name": "nvmf_tgt_poll_group_003", 00:10:10.958 "admin_qpairs": 0, 00:10:10.958 "io_qpairs": 224, 00:10:10.958 "current_admin_qpairs": 0, 00:10:10.958 "current_io_qpairs": 0, 00:10:10.958 "pending_bdev_io": 0, 00:10:10.958 "completed_nvme_io": 225, 00:10:10.958 "transports": [ 00:10:10.958 { 00:10:10.958 "trtype": "TCP" 00:10:10.958 } 00:10:10.958 ] 00:10:10.958 } 00:10:10.958 ] 00:10:10.958 }' 00:10:10.958 19:25:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:10:10.958 19:25:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:10.958 19:25:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:10.958 19:25:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:10.958 19:25:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:10:10.958 19:25:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:10:10.958 19:25:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:10.958 19:25:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:10.958 19:25:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:10.958 19:25:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:10:10.958 19:25:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:10:10.958 19:25:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:10:10.958 19:25:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:10:10.958 19:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:10.958 19:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:10:10.958 19:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:10.958 19:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:10:10.958 19:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:10.958 19:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:10.958 rmmod nvme_tcp 00:10:10.958 rmmod nvme_fabrics 00:10:10.958 rmmod nvme_keyring 00:10:11.219 
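The jsum checks above sum one counter across every poll group in the nvmf_get_stats JSON (jq extracts the per-group values, awk adds them up). The same totals can be reproduced with a single jq expression per field (sketch, assuming the target's RPC socket is still reachable through scripts/rpc.py):

  # total admin and I/O queue pairs across all poll groups (equivalent one-liners)
  scripts/rpc.py nvmf_get_stats | jq '[.poll_groups[].admin_qpairs] | add'   # 7 in this run
  scripts/rpc.py nvmf_get_stats | jq '[.poll_groups[].io_qpairs] | add'      # 889 in this run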
19:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:11.219 19:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:10:11.219 19:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:10:11.219 19:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3434784 ']' 00:10:11.219 19:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3434784 00:10:11.219 19:25:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 3434784 ']' 00:10:11.219 19:25:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 3434784 00:10:11.219 19:25:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:10:11.219 19:25:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:11.219 19:25:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3434784 00:10:11.219 19:25:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:11.219 19:25:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:11.219 19:25:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3434784' 00:10:11.219 killing process with pid 3434784 00:10:11.219 19:25:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 3434784 00:10:11.219 [2024-05-15 19:25:37.212860] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:11.219 19:25:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 3434784 00:10:11.219 19:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:11.219 19:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:11.219 19:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:11.219 19:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:11.219 19:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:11.219 19:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.219 19:25:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:11.219 19:25:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.762 19:25:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:13.762 00:10:13.762 real 0m38.957s 00:10:13.762 user 1m54.532s 00:10:13.762 sys 0m8.316s 00:10:13.762 19:25:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:13.762 19:25:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:13.762 ************************************ 00:10:13.762 END TEST nvmf_rpc 00:10:13.762 ************************************ 00:10:13.762 19:25:39 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:13.762 19:25:39 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:13.762 19:25:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:13.762 19:25:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:13.762 ************************************ 00:10:13.762 START TEST nvmf_invalid 00:10:13.762 ************************************ 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:13.762 * Looking for test storage... 00:10:13.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:10:13.762 19:25:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:21.900 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:21.900 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.900 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:21.900 Found net devices under 0000:31:00.0: cvl_0_0 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:21.901 Found net devices under 0000:31:00.1: cvl_0_1 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:21.901 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:21.901 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:10:21.901 00:10:21.901 --- 10.0.0.2 ping statistics --- 00:10:21.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.901 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:21.901 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:21.901 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.357 ms 00:10:21.901 00:10:21.901 --- 10.0.0.1 ping statistics --- 00:10:21.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.901 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=3445254 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3445254 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 3445254 ']' 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:21.901 19:25:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:21.901 [2024-05-15 19:25:47.970751] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
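At this point nvmftestinit has split the two E810 ports between the root namespace (initiator side, cvl_0_1, 10.0.0.1) and the cvl_0_0_ns_spdk namespace (target side, cvl_0_0, 10.0.0.2), opened TCP port 4420, and ping-checked both directions before launching nvmf_tgt inside the namespace. Condensed from the trace:

  # network plumbing behind the ping checks above (condensed)
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns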
00:10:21.901 [2024-05-15 19:25:47.970798] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:21.901 EAL: No free 2048 kB hugepages reported on node 1 00:10:21.901 [2024-05-15 19:25:48.064180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:22.181 [2024-05-15 19:25:48.135485] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:22.181 [2024-05-15 19:25:48.135534] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:22.181 [2024-05-15 19:25:48.135542] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:22.181 [2024-05-15 19:25:48.135549] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:22.181 [2024-05-15 19:25:48.135554] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:22.181 [2024-05-15 19:25:48.135674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:22.181 [2024-05-15 19:25:48.135810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:22.181 [2024-05-15 19:25:48.135969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.182 [2024-05-15 19:25:48.135970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:22.755 19:25:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:22.755 19:25:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:10:22.755 19:25:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:22.755 19:25:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:22.755 19:25:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:22.755 19:25:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:22.755 19:25:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:22.755 19:25:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode23766 00:10:23.016 [2024-05-15 19:25:49.072652] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:10:23.016 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:10:23.016 { 00:10:23.016 "nqn": "nqn.2016-06.io.spdk:cnode23766", 00:10:23.016 "tgt_name": "foobar", 00:10:23.016 "method": "nvmf_create_subsystem", 00:10:23.016 "req_id": 1 00:10:23.016 } 00:10:23.016 Got JSON-RPC error response 00:10:23.016 response: 00:10:23.016 { 00:10:23.016 "code": -32603, 00:10:23.016 "message": "Unable to find target foobar" 00:10:23.016 }' 00:10:23.016 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:10:23.016 { 00:10:23.016 "nqn": "nqn.2016-06.io.spdk:cnode23766", 00:10:23.016 "tgt_name": "foobar", 00:10:23.016 "method": "nvmf_create_subsystem", 00:10:23.016 "req_id": 1 00:10:23.016 } 00:10:23.016 Got JSON-RPC error response 00:10:23.016 response: 00:10:23.016 { 00:10:23.016 "code": -32603, 00:10:23.016 "message": "Unable to find target foobar" 00:10:23.016 } == *\U\n\a\b\l\e\ 
\t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:10:23.016 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:10:23.016 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode17619 00:10:23.277 [2024-05-15 19:25:49.301460] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17619: invalid serial number 'SPDKISFASTANDAWESOME' 00:10:23.277 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:10:23.277 { 00:10:23.277 "nqn": "nqn.2016-06.io.spdk:cnode17619", 00:10:23.277 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:23.277 "method": "nvmf_create_subsystem", 00:10:23.277 "req_id": 1 00:10:23.277 } 00:10:23.277 Got JSON-RPC error response 00:10:23.277 response: 00:10:23.277 { 00:10:23.277 "code": -32602, 00:10:23.277 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:23.277 }' 00:10:23.277 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:10:23.277 { 00:10:23.277 "nqn": "nqn.2016-06.io.spdk:cnode17619", 00:10:23.277 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:23.277 "method": "nvmf_create_subsystem", 00:10:23.277 "req_id": 1 00:10:23.277 } 00:10:23.277 Got JSON-RPC error response 00:10:23.277 response: 00:10:23.277 { 00:10:23.277 "code": -32602, 00:10:23.277 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:23.277 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:23.277 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:10:23.277 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode4407 00:10:23.537 [2024-05-15 19:25:49.526159] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4407: invalid model number 'SPDK_Controller' 00:10:23.537 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:10:23.537 { 00:10:23.537 "nqn": "nqn.2016-06.io.spdk:cnode4407", 00:10:23.537 "model_number": "SPDK_Controller\u001f", 00:10:23.537 "method": "nvmf_create_subsystem", 00:10:23.537 "req_id": 1 00:10:23.537 } 00:10:23.537 Got JSON-RPC error response 00:10:23.537 response: 00:10:23.537 { 00:10:23.537 "code": -32602, 00:10:23.537 "message": "Invalid MN SPDK_Controller\u001f" 00:10:23.537 }' 00:10:23.537 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:10:23.537 { 00:10:23.537 "nqn": "nqn.2016-06.io.spdk:cnode4407", 00:10:23.537 "model_number": "SPDK_Controller\u001f", 00:10:23.537 "method": "nvmf_create_subsystem", 00:10:23.537 "req_id": 1 00:10:23.537 } 00:10:23.537 Got JSON-RPC error response 00:10:23.537 response: 00:10:23.537 { 00:10:23.537 "code": -32602, 00:10:23.537 "message": "Invalid MN SPDK_Controller\u001f" 00:10:23.537 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:23.537 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:10:23.537 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:10:23.537 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' 
'91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:23.537 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:23.537 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:23.537 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:23.537 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:23.537 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:10:23.537 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:10:23.537 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:10:23.537 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:23.537 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:23.537 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:10:23.537 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:10:23.537 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:10:23.537 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:23.537 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:23.537 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:10:23.537 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:10:23.537 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:10:23.537 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 34 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ d == \- ]] 00:10:23.538 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'dt>x_m\j$:k&1J="C0\3d' 00:10:23.798 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'dt>x_m\j$:k&1J="C0\3d' nqn.2016-06.io.spdk:cnode22724 00:10:23.798 [2024-05-15 19:25:49.911385] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22724: invalid serial number 'dt>x_m\j$:k&1J="C0\3d' 00:10:23.798 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:10:23.798 { 00:10:23.798 "nqn": "nqn.2016-06.io.spdk:cnode22724", 00:10:23.798 "serial_number": "dt>x_m\\j$:k&1J=\"C0\\3d", 00:10:23.798 "method": "nvmf_create_subsystem", 00:10:23.798 "req_id": 1 00:10:23.798 } 00:10:23.798 Got JSON-RPC error response 00:10:23.798 response: 00:10:23.798 { 00:10:23.798 "code": -32602, 
00:10:23.798 "message": "Invalid SN dt>x_m\\j$:k&1J=\"C0\\3d" 00:10:23.798 }' 00:10:23.798 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:10:23.798 { 00:10:23.798 "nqn": "nqn.2016-06.io.spdk:cnode22724", 00:10:23.798 "serial_number": "dt>x_m\\j$:k&1J=\"C0\\3d", 00:10:23.798 "method": "nvmf_create_subsystem", 00:10:23.798 "req_id": 1 00:10:23.798 } 00:10:23.798 Got JSON-RPC error response 00:10:23.798 response: 00:10:23.798 { 00:10:23.798 "code": -32602, 00:10:23.798 "message": "Invalid SN dt>x_m\\j$:k&1J=\"C0\\3d" 00:10:23.798 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:23.798 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:10:23.798 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:10:23.798 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:23.798 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:23.798 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:23.798 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:23.799 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:23.799 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:10:23.799 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:10:23.799 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:10:23.799 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:23.799 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:23.799 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:10:23.799 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:10:23.799 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:10:23.799 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:23.799 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:23.799 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:10:23.799 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:10:23.799 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:10:23.799 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:23.799 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:23.799 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:10:23.799 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:10:23.799 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:10:23.799 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:23.799 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:23.799 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:10:24.061 19:25:49 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:10:24.061 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:10:24.061 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:24.061 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:24.061 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:10:24.061 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:10:24.061 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:10:24.061 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:24.061 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:24.061 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:10:24.061 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:10:24.061 19:25:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:10:24.061 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:24.061 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:24.061 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:10:24.061 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:10:24.061 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:10:24.061 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:24.061 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:24.061 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:10:24.061 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:10:24.061 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:10:24.061 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:24.061 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:24.061 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:10:24.061 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:10:24.061 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:10:24.061 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:24.061 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:24.061 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:10:24.061 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:10:24.061 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:10:24.061 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:24.061 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:24.061 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:10:24.061 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:10:24.061 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:10:24.061 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:24.061 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:24.061 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:10:24.061 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:10:24.061 19:25:50 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:10:24.061 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:10:24.062 19:25:50 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:24.062 19:25:50 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:24.062 
19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:24.062 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:10:24.323 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:10:24.323 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:10:24.323 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:24.323 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:24.323 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:10:24.323 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:10:24.323 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:10:24.323 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:24.323 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:24.323 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ T == \- ]] 00:10:24.323 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'TuZ(R,SX:}O;.`j\W}INYCa#2&T8]}#NN`1(,^( -' 00:10:24.323 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'TuZ(R,SX:}O;.`j\W}INYCa#2&T8]}#NN`1(,^( -' nqn.2016-06.io.spdk:cnode28755 00:10:24.323 [2024-05-15 19:25:50.449134] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28755: invalid model number 'TuZ(R,SX:}O;.`j\W}INYCa#2&T8]}#NN`1(,^( -' 00:10:24.323 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:10:24.324 { 00:10:24.324 "nqn": "nqn.2016-06.io.spdk:cnode28755", 00:10:24.324 "model_number": "TuZ(R,SX:}O;.`j\\W}INYCa#2&T8]}#NN`1(,^( -", 00:10:24.324 "method": "nvmf_create_subsystem", 00:10:24.324 "req_id": 1 00:10:24.324 } 00:10:24.324 Got JSON-RPC error response 00:10:24.324 response: 00:10:24.324 { 00:10:24.324 "code": -32602, 00:10:24.324 "message": "Invalid MN TuZ(R,SX:}O;.`j\\W}INYCa#2&T8]}#NN`1(,^( -" 00:10:24.324 }' 00:10:24.324 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:10:24.324 { 00:10:24.324 "nqn": "nqn.2016-06.io.spdk:cnode28755", 00:10:24.324 "model_number": "TuZ(R,SX:}O;.`j\\W}INYCa#2&T8]}#NN`1(,^( -", 00:10:24.324 "method": "nvmf_create_subsystem", 00:10:24.324 "req_id": 1 00:10:24.324 } 00:10:24.324 Got JSON-RPC error response 00:10:24.324 response: 00:10:24.324 { 00:10:24.324 "code": -32602, 00:10:24.324 "message": "Invalid MN TuZ(R,SX:}O;.`j\\W}INYCa#2&T8]}#NN`1(,^( -" 00:10:24.324 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:24.324 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:10:24.583 [2024-05-15 19:25:50.669947] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:24.583 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:10:24.842 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:10:24.842 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:10:24.842 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:10:24.842 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:10:24.842 19:25:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:10:25.101 [2024-05-15 19:25:51.127387] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:25.101 [2024-05-15 19:25:51.127461] nvmf_rpc.c: 794:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:10:25.101 19:25:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:10:25.101 { 00:10:25.101 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:25.101 "listen_address": { 00:10:25.101 "trtype": "tcp", 00:10:25.101 "traddr": "", 00:10:25.101 "trsvcid": "4421" 00:10:25.101 }, 00:10:25.101 "method": "nvmf_subsystem_remove_listener", 00:10:25.101 "req_id": 1 00:10:25.101 } 00:10:25.101 Got JSON-RPC error response 00:10:25.101 response: 00:10:25.101 { 00:10:25.101 "code": -32602, 00:10:25.101 "message": "Invalid parameters" 00:10:25.101 }' 00:10:25.101 19:25:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:10:25.101 { 00:10:25.101 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:25.101 "listen_address": { 00:10:25.101 "trtype": "tcp", 00:10:25.101 "traddr": "", 00:10:25.101 "trsvcid": "4421" 00:10:25.101 }, 00:10:25.101 "method": "nvmf_subsystem_remove_listener", 00:10:25.101 "req_id": 1 00:10:25.101 } 00:10:25.101 Got JSON-RPC error response 00:10:25.101 response: 00:10:25.101 { 00:10:25.101 "code": -32602, 00:10:25.101 "message": "Invalid parameters" 00:10:25.101 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:10:25.102 19:25:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13066 -i 0 00:10:25.362 [2024-05-15 19:25:51.352099] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13066: invalid cntlid range [0-65519] 00:10:25.362 19:25:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:10:25.362 { 00:10:25.362 "nqn": "nqn.2016-06.io.spdk:cnode13066", 00:10:25.362 "min_cntlid": 0, 00:10:25.362 "method": "nvmf_create_subsystem", 00:10:25.362 "req_id": 1 00:10:25.362 } 00:10:25.362 Got JSON-RPC error response 00:10:25.362 response: 00:10:25.362 { 00:10:25.362 "code": -32602, 00:10:25.362 "message": "Invalid cntlid range [0-65519]" 00:10:25.362 }' 00:10:25.362 19:25:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:10:25.362 { 00:10:25.362 "nqn": "nqn.2016-06.io.spdk:cnode13066", 00:10:25.362 "min_cntlid": 0, 00:10:25.362 "method": "nvmf_create_subsystem", 00:10:25.362 "req_id": 1 
00:10:25.362 } 00:10:25.362 Got JSON-RPC error response 00:10:25.362 response: 00:10:25.362 { 00:10:25.362 "code": -32602, 00:10:25.362 "message": "Invalid cntlid range [0-65519]" 00:10:25.362 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:25.362 19:25:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4680 -i 65520 00:10:25.623 [2024-05-15 19:25:51.576841] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4680: invalid cntlid range [65520-65519] 00:10:25.623 19:25:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:10:25.623 { 00:10:25.623 "nqn": "nqn.2016-06.io.spdk:cnode4680", 00:10:25.623 "min_cntlid": 65520, 00:10:25.623 "method": "nvmf_create_subsystem", 00:10:25.623 "req_id": 1 00:10:25.623 } 00:10:25.623 Got JSON-RPC error response 00:10:25.623 response: 00:10:25.623 { 00:10:25.623 "code": -32602, 00:10:25.623 "message": "Invalid cntlid range [65520-65519]" 00:10:25.623 }' 00:10:25.623 19:25:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:10:25.623 { 00:10:25.623 "nqn": "nqn.2016-06.io.spdk:cnode4680", 00:10:25.623 "min_cntlid": 65520, 00:10:25.623 "method": "nvmf_create_subsystem", 00:10:25.623 "req_id": 1 00:10:25.623 } 00:10:25.623 Got JSON-RPC error response 00:10:25.623 response: 00:10:25.623 { 00:10:25.623 "code": -32602, 00:10:25.623 "message": "Invalid cntlid range [65520-65519]" 00:10:25.623 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:25.623 19:25:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11291 -I 0 00:10:25.623 [2024-05-15 19:25:51.801594] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11291: invalid cntlid range [1-0] 00:10:25.883 19:25:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:10:25.883 { 00:10:25.883 "nqn": "nqn.2016-06.io.spdk:cnode11291", 00:10:25.883 "max_cntlid": 0, 00:10:25.883 "method": "nvmf_create_subsystem", 00:10:25.883 "req_id": 1 00:10:25.883 } 00:10:25.883 Got JSON-RPC error response 00:10:25.883 response: 00:10:25.883 { 00:10:25.883 "code": -32602, 00:10:25.883 "message": "Invalid cntlid range [1-0]" 00:10:25.883 }' 00:10:25.883 19:25:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:10:25.883 { 00:10:25.883 "nqn": "nqn.2016-06.io.spdk:cnode11291", 00:10:25.883 "max_cntlid": 0, 00:10:25.883 "method": "nvmf_create_subsystem", 00:10:25.883 "req_id": 1 00:10:25.883 } 00:10:25.883 Got JSON-RPC error response 00:10:25.883 response: 00:10:25.883 { 00:10:25.883 "code": -32602, 00:10:25.883 "message": "Invalid cntlid range [1-0]" 00:10:25.883 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:25.883 19:25:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26881 -I 65520 00:10:25.883 [2024-05-15 19:25:52.022303] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26881: invalid cntlid range [1-65520] 00:10:25.883 19:25:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:10:25.883 { 00:10:25.883 "nqn": "nqn.2016-06.io.spdk:cnode26881", 00:10:25.883 "max_cntlid": 65520, 00:10:25.883 "method": "nvmf_create_subsystem", 00:10:25.883 "req_id": 1 00:10:25.883 } 00:10:25.883 Got 
JSON-RPC error response 00:10:25.883 response: 00:10:25.883 { 00:10:25.883 "code": -32602, 00:10:25.883 "message": "Invalid cntlid range [1-65520]" 00:10:25.883 }' 00:10:25.883 19:25:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:10:25.883 { 00:10:25.883 "nqn": "nqn.2016-06.io.spdk:cnode26881", 00:10:25.883 "max_cntlid": 65520, 00:10:25.883 "method": "nvmf_create_subsystem", 00:10:25.883 "req_id": 1 00:10:25.883 } 00:10:25.883 Got JSON-RPC error response 00:10:25.883 response: 00:10:25.883 { 00:10:25.883 "code": -32602, 00:10:25.883 "message": "Invalid cntlid range [1-65520]" 00:10:25.883 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:25.883 19:25:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9465 -i 6 -I 5 00:10:26.143 [2024-05-15 19:25:52.238996] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9465: invalid cntlid range [6-5] 00:10:26.143 19:25:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:10:26.143 { 00:10:26.143 "nqn": "nqn.2016-06.io.spdk:cnode9465", 00:10:26.143 "min_cntlid": 6, 00:10:26.143 "max_cntlid": 5, 00:10:26.143 "method": "nvmf_create_subsystem", 00:10:26.143 "req_id": 1 00:10:26.143 } 00:10:26.143 Got JSON-RPC error response 00:10:26.143 response: 00:10:26.143 { 00:10:26.143 "code": -32602, 00:10:26.143 "message": "Invalid cntlid range [6-5]" 00:10:26.143 }' 00:10:26.143 19:25:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:10:26.143 { 00:10:26.143 "nqn": "nqn.2016-06.io.spdk:cnode9465", 00:10:26.143 "min_cntlid": 6, 00:10:26.143 "max_cntlid": 5, 00:10:26.143 "method": "nvmf_create_subsystem", 00:10:26.143 "req_id": 1 00:10:26.143 } 00:10:26.143 Got JSON-RPC error response 00:10:26.143 response: 00:10:26.143 { 00:10:26.143 "code": -32602, 00:10:26.143 "message": "Invalid cntlid range [6-5]" 00:10:26.143 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:26.143 19:25:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:10:26.403 19:25:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:10:26.403 { 00:10:26.403 "name": "foobar", 00:10:26.403 "method": "nvmf_delete_target", 00:10:26.403 "req_id": 1 00:10:26.403 } 00:10:26.403 Got JSON-RPC error response 00:10:26.403 response: 00:10:26.403 { 00:10:26.403 "code": -32602, 00:10:26.403 "message": "The specified target doesn'\''t exist, cannot delete it." 00:10:26.403 }' 00:10:26.403 19:25:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:10:26.403 { 00:10:26.403 "name": "foobar", 00:10:26.403 "method": "nvmf_delete_target", 00:10:26.403 "req_id": 1 00:10:26.403 } 00:10:26.403 Got JSON-RPC error response 00:10:26.403 response: 00:10:26.403 { 00:10:26.403 "code": -32602, 00:10:26.403 "message": "The specified target doesn't exist, cannot delete it." 
00:10:26.403 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:10:26.403 19:25:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:10:26.403 19:25:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:10:26.403 19:25:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:26.403 19:25:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:10:26.403 19:25:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:26.403 19:25:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:10:26.403 19:25:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:26.403 19:25:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:26.403 rmmod nvme_tcp 00:10:26.403 rmmod nvme_fabrics 00:10:26.403 rmmod nvme_keyring 00:10:26.403 19:25:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:26.403 19:25:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:10:26.403 19:25:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:10:26.403 19:25:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 3445254 ']' 00:10:26.403 19:25:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 3445254 00:10:26.403 19:25:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@946 -- # '[' -z 3445254 ']' 00:10:26.403 19:25:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@950 -- # kill -0 3445254 00:10:26.403 19:25:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # uname 00:10:26.403 19:25:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:26.403 19:25:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3445254 00:10:26.403 19:25:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:26.403 19:25:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:26.403 19:25:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3445254' 00:10:26.403 killing process with pid 3445254 00:10:26.403 19:25:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@965 -- # kill 3445254 00:10:26.403 [2024-05-15 19:25:52.492414] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:26.403 19:25:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@970 -- # wait 3445254 00:10:26.663 19:25:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:26.663 19:25:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:26.663 19:25:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:26.663 19:25:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:26.663 19:25:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:26.663 19:25:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:26.663 19:25:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:26.663 19:25:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.571 19:25:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
00:10:28.571 00:10:28.571 real 0m15.188s 00:10:28.571 user 0m23.118s 00:10:28.571 sys 0m7.099s 00:10:28.571 19:25:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:28.571 19:25:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:28.571 ************************************ 00:10:28.571 END TEST nvmf_invalid 00:10:28.571 ************************************ 00:10:28.571 19:25:54 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:28.571 19:25:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:28.571 19:25:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:28.571 19:25:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:28.834 ************************************ 00:10:28.834 START TEST nvmf_abort 00:10:28.834 ************************************ 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:28.834 * Looking for test storage... 00:10:28.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
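Before the abort test can reach the e810 ports over TCP, the nvmftestinit call above has nvmf/common.sh move one port into a private network namespace and assign the 10.0.0.x addresses that appear in the ping output further down. A condensed sketch of that preparation, assuming the two ports have already been exposed as cvl_0_0 and cvl_0_1 (the ns shell variable is only a shorthand used here):

ns=cvl_0_0_ns_spdk

ip netns add $ns                                        # target-side namespace
ip link set cvl_0_0 netns $ns                           # move one port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, host namespace
ip netns exec $ns ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec $ns ip link set cvl_0_0 up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                      # verify the host-to-namespace path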
00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:10:28.834 19:25:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:37.024 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:37.024 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:10:37.024 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:37.024 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:37.024 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:37.024 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:37.024 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:37.024 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:10:37.024 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:37.024 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:10:37.024 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:10:37.024 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:10:37.024 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:10:37.024 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:10:37.024 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:10:37.024 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:37.024 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:37.024 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:37.024 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:37.024 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:37.024 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:37.024 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:37.024 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:37.024 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:37.024 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:37.024 19:26:02 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:37.024 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:37.024 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:37.024 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:37.024 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:37.024 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:37.024 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:37.024 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:37.024 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:37.024 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:37.024 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:37.024 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:37.024 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:37.024 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:37.024 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:37.024 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:37.025 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:37.025 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:37.025 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:37.025 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:37.025 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:37.025 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:37.025 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:37.025 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:37.025 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:37.025 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:37.025 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:37.025 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.025 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:37.025 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:37.025 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:37.025 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:37.025 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.025 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:37.025 Found net devices under 0000:31:00.0: cvl_0_0 00:10:37.025 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.025 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:37.025 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.025 19:26:02 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:37.025 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:37.025 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:37.025 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:37.025 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.025 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:37.025 Found net devices under 0000:31:00.1: cvl_0_1 00:10:37.025 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.025 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:37.025 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:10:37.025 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:37.025 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:37.025 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:37.025 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:37.025 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:37.025 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:37.025 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:37.025 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:37.025 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:37.025 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:37.025 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:37.025 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:37.025 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:37.025 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:37.025 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:37.025 19:26:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:37.025 19:26:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:37.025 19:26:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:37.025 19:26:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:37.025 19:26:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:37.025 19:26:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:37.025 19:26:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:37.025 19:26:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:37.025 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
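Condensed from the trace above, the nvmf_tcp_init step amounts to the following sequence. The interface names (cvl_0_0, cvl_0_1), the namespace name and the 10.0.0.x addresses are the ones reported for this host's two E810 ports; other hosts will report different ice netdev names, so treat this as an approximate, host-specific sketch rather than the literal script body.

  # cvl_0_0 becomes the target-side port inside a private namespace,
  # cvl_0_1 stays in the default namespace as the initiator side.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # allow NVMe/TCP
  ping -c 1 10.0.0.2                                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator

Both pings must succeed before the harness proceeds; the replies recorded next confirm that the two ports can reach each other before the target application is started.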
00:10:37.025 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:10:37.025 00:10:37.025 --- 10.0.0.2 ping statistics --- 00:10:37.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.025 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:10:37.025 19:26:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:37.286 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:37.286 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.342 ms 00:10:37.286 00:10:37.286 --- 10.0.0.1 ping statistics --- 00:10:37.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.286 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:10:37.286 19:26:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:37.286 19:26:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:10:37.286 19:26:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:37.286 19:26:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:37.286 19:26:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:37.286 19:26:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:37.286 19:26:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:37.286 19:26:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:37.286 19:26:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:37.286 19:26:03 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:10:37.286 19:26:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:37.286 19:26:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:37.286 19:26:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:37.286 19:26:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=3450926 00:10:37.286 19:26:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3450926 00:10:37.286 19:26:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:37.286 19:26:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 3450926 ']' 00:10:37.286 19:26:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.286 19:26:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:37.286 19:26:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.286 19:26:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:37.286 19:26:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:37.286 [2024-05-15 19:26:03.317739] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
00:10:37.286 [2024-05-15 19:26:03.317802] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.286 EAL: No free 2048 kB hugepages reported on node 1 00:10:37.286 [2024-05-15 19:26:03.395253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:37.286 [2024-05-15 19:26:03.468211] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:37.286 [2024-05-15 19:26:03.468251] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:37.286 [2024-05-15 19:26:03.468259] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:37.286 [2024-05-15 19:26:03.468270] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:37.286 [2024-05-15 19:26:03.468276] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:37.286 [2024-05-15 19:26:03.468379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:37.286 [2024-05-15 19:26:03.468697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:37.286 [2024-05-15 19:26:03.468698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:38.227 19:26:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:38.227 19:26:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:10:38.227 19:26:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:38.227 19:26:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:38.227 19:26:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:38.227 19:26:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:38.227 19:26:04 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:10:38.227 19:26:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.227 19:26:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:38.227 [2024-05-15 19:26:04.248981] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:38.227 19:26:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.227 19:26:04 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:10:38.227 19:26:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.227 19:26:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:38.227 Malloc0 00:10:38.227 19:26:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.227 19:26:04 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:38.227 19:26:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.227 19:26:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:38.227 Delay0 00:10:38.227 19:26:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.227 19:26:04 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:38.227 19:26:04 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.227 19:26:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:38.227 19:26:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.227 19:26:04 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:10:38.227 19:26:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.227 19:26:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:38.227 19:26:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.227 19:26:04 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:38.227 19:26:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.227 19:26:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:38.227 [2024-05-15 19:26:04.330566] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:38.227 [2024-05-15 19:26:04.330786] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:38.227 19:26:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.227 19:26:04 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:38.227 19:26:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.227 19:26:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:38.227 19:26:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.227 19:26:04 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:10:38.227 EAL: No free 2048 kB hugepages reported on node 1 00:10:38.485 [2024-05-15 19:26:04.458976] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:41.023 Initializing NVMe Controllers 00:10:41.023 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:41.023 controller IO queue size 128 less than required 00:10:41.023 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:10:41.023 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:10:41.023 Initialization complete. Launching workers. 
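Stripped of the xtrace noise, the target bring-up and the abort workload in this test reduce to the steps below (abort.sh@15 through @30 in the trace). rpc_cmd is the harness wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock; the spdk_get_version readiness probe is only an approximation of the harness's waitforlisten helper, and paths are shown relative to the spdk checkout rather than the Jenkins workspace.

  # Start the target inside the namespace (-m 0xE = cores 1-3, matching the reactor log above).
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 1; done  # ~waitforlisten

  # Configure the abort target: TCP transport, a malloc bdev behind a 1 s delay bdev,
  # one subsystem with that namespace, plus data and discovery listeners.
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  ./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
  ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # Drive aborts against the delayed namespace for 1 second at queue depth 128.
  ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The result block that follows ("I/O completed: 123, failed: 32574" versus "abort submitted 32635") is consistent with nearly every read sitting behind the one-second bdev_delay and being aborted rather than completing.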
00:10:41.023 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 32574 00:10:41.023 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32635, failed to submit 62 00:10:41.023 success 32578, unsuccess 57, failed 0 00:10:41.023 19:26:06 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:41.023 19:26:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.023 19:26:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:41.023 19:26:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.023 19:26:06 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:10:41.023 19:26:06 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:10:41.023 19:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:41.023 19:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:10:41.023 19:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:41.023 19:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:10:41.023 19:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:41.023 19:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:41.023 rmmod nvme_tcp 00:10:41.023 rmmod nvme_fabrics 00:10:41.023 rmmod nvme_keyring 00:10:41.023 19:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:41.023 19:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:10:41.023 19:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:10:41.023 19:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3450926 ']' 00:10:41.023 19:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3450926 00:10:41.023 19:26:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 3450926 ']' 00:10:41.024 19:26:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 3450926 00:10:41.024 19:26:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:10:41.024 19:26:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:41.024 19:26:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3450926 00:10:41.024 19:26:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:10:41.024 19:26:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:10:41.024 19:26:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3450926' 00:10:41.024 killing process with pid 3450926 00:10:41.024 19:26:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 3450926 00:10:41.024 [2024-05-15 19:26:06.755723] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:41.024 19:26:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 3450926 00:10:41.024 19:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:41.024 19:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:41.024 19:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:41.024 19:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:41.024 
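The teardown that brackets this point (nvmftestfini, finishing at the END TEST banner below) reduces to roughly the following; the body of _remove_spdk_ns is suppressed by xtrace_disable_per_cmd in the trace, so the namespace-deletion step is an assumption rather than a quoted command.

  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  sync
  modprobe -v -r nvme-tcp        # also unloads nvme_fabrics and nvme_keyring, as logged
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                # killprocess: the nvmf_tgt started above (pid 3450926 here)
  # nvmf_tcp_fini: _remove_spdk_ns is assumed to delete the cvl_0_0_ns_spdk namespace,
  # returning cvl_0_0 to the default namespace; the initiator address is flushed explicitly.
  ip -4 addr flush cvl_0_1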
19:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:41.024 19:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.024 19:26:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:41.024 19:26:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:42.935 19:26:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:42.935 00:10:42.935 real 0m14.180s 00:10:42.936 user 0m14.466s 00:10:42.936 sys 0m7.131s 00:10:42.936 19:26:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:42.936 19:26:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:42.936 ************************************ 00:10:42.936 END TEST nvmf_abort 00:10:42.936 ************************************ 00:10:42.936 19:26:09 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:42.936 19:26:09 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:42.936 19:26:09 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:42.936 19:26:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:42.936 ************************************ 00:10:42.936 START TEST nvmf_ns_hotplug_stress 00:10:42.936 ************************************ 00:10:42.936 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:43.196 * Looking for test storage... 00:10:43.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:43.196 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:43.196 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:10:43.196 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:43.196 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:43.196 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:43.196 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:43.196 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:43.196 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:43.196 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:43.196 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:43.196 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:43.196 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:43.196 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:43.196 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:43.196 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:43.196 
19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:43.196 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:43.196 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:43.196 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:43.196 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:43.196 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:43.196 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:43.196 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.196 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.196 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.196 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:10:43.196 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.197 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:10:43.197 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:43.197 
19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:43.197 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:43.197 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:43.197 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:43.197 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:43.197 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:43.197 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:43.197 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:43.197 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:10:43.197 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:43.197 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:43.197 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:43.197 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:43.197 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:43.197 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.197 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:43.197 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.197 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:43.197 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:43.197 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:43.197 19:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:51.331 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:51.332 19:26:17 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:51.332 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:51.332 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.332 
19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:51.332 Found net devices under 0000:31:00.0: cvl_0_0 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:51.332 Found net devices under 0000:31:00.1: cvl_0_1 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:51.332 
19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:51.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:51.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.577 ms 00:10:51.332 00:10:51.332 --- 10.0.0.2 ping statistics --- 00:10:51.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.332 rtt min/avg/max/mdev = 0.577/0.577/0.577/0.000 ms 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:51.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:51.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.350 ms 00:10:51.332 00:10:51.332 --- 10.0.0.1 ping statistics --- 00:10:51.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.332 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3456384 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3456384 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:51.332 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 3456384 ']' 00:10:51.333 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.333 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:51.333 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.333 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:51.333 19:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:51.594 [2024-05-15 19:26:17.559430] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:10:51.594 [2024-05-15 19:26:17.559501] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.594 EAL: No free 2048 kB hugepages reported on node 1 00:10:51.594 [2024-05-15 19:26:17.638347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:51.594 [2024-05-15 19:26:17.712615] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:51.594 [2024-05-15 19:26:17.712653] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:51.594 [2024-05-15 19:26:17.712660] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:51.594 [2024-05-15 19:26:17.712667] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:51.594 [2024-05-15 19:26:17.712672] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:51.594 [2024-05-15 19:26:17.712786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:51.594 [2024-05-15 19:26:17.712819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:51.594 [2024-05-15 19:26:17.712820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.535 19:26:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:52.535 19:26:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:10:52.535 19:26:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:52.535 19:26:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:52.535 19:26:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:52.535 19:26:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:52.535 19:26:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:10:52.535 19:26:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:52.535 [2024-05-15 19:26:18.665540] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:52.535 19:26:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:52.795 19:26:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:53.055 [2024-05-15 19:26:19.094999] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:53.055 [2024-05-15 19:26:19.095220] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:53.055 19:26:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:53.315 19:26:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:10:53.575 Malloc0 00:10:53.575 19:26:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:53.836 Delay0 00:10:53.836 19:26:19 
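For the hotplug-stress target the configuration is similar but deliberately small: a subsystem capped at 10 namespaces (-m 10) backed by a delayed 32 MB malloc bdev and, a few lines further on, a resizable null bdev. Condensed from ns_hotplug_stress.sh@27 through @33 above (paths again shown relative to the spdk checkout):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0        # 32 MB, 512-byte blocks
  ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  # The trace continues with: add Delay0 as a namespace, create NULL1 (bdev_null_create
  # NULL1 1000 512) and add it as a second namespace, then launch spdk_nvme_perf.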
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:53.836 19:26:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:54.096 NULL1 00:10:54.096 19:26:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:54.356 19:26:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:54.356 19:26:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3456948 00:10:54.356 19:26:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:10:54.356 19:26:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:54.356 EAL: No free 2048 kB hugepages reported on node 1 00:10:54.617 19:26:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:54.879 19:26:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:54.879 19:26:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:55.140 true 00:10:55.140 19:26:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:10:55.140 19:26:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:55.140 19:26:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:55.401 19:26:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:55.401 19:26:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:55.662 true 00:10:55.662 19:26:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:10:55.662 19:26:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:55.923 19:26:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:56.184 19:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:56.184 19:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1003 00:10:56.445 true 00:10:56.445 19:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:10:56.445 19:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:56.707 19:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:56.707 19:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:56.707 19:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:56.967 true 00:10:56.967 19:26:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:10:56.967 19:26:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:57.227 19:26:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:57.487 19:26:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:57.487 19:26:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:57.747 true 00:10:57.747 19:26:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:10:57.747 19:26:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:58.009 19:26:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:58.009 19:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:58.009 19:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:58.271 true 00:10:58.271 19:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:10:58.271 19:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:58.531 19:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:58.792 19:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:10:58.792 19:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:59.052 true 00:10:59.052 19:26:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 
00:10:59.052 19:26:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:59.313 19:26:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:59.313 19:26:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:59.313 19:26:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:59.573 true 00:10:59.573 19:26:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:10:59.573 19:26:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:59.833 19:26:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:00.094 19:26:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:11:00.094 19:26:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:11:00.355 true 00:11:00.355 19:26:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:11:00.355 19:26:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:00.615 19:26:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:00.875 19:26:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:11:00.875 19:26:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:11:00.875 true 00:11:00.875 19:26:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:11:00.875 19:26:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:01.135 19:26:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:01.395 19:26:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:11:01.395 19:26:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:11:01.655 true 00:11:01.655 19:26:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:11:01.655 19:26:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:01.915 19:26:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:02.176 19:26:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:11:02.176 19:26:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:11:02.176 true 00:11:02.176 19:26:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:11:02.176 19:26:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:02.437 19:26:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:02.697 19:26:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:11:02.697 19:26:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:11:02.956 true 00:11:02.956 19:26:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:11:02.956 19:26:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:03.216 19:26:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:03.475 19:26:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:11:03.475 19:26:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:11:03.475 true 00:11:03.735 19:26:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:11:03.735 19:26:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:03.735 19:26:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:03.995 19:26:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:11:03.995 19:26:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:11:04.257 true 00:11:04.257 19:26:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:11:04.257 19:26:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:04.521 19:26:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:04.521 19:26:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:11:04.521 19:26:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:11:04.850 true 00:11:04.850 19:26:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:11:04.850 19:26:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:05.133 19:26:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:05.133 19:26:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:11:05.133 19:26:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:11:05.393 true 00:11:05.393 19:26:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:11:05.393 19:26:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:05.654 19:26:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:05.914 19:26:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:11:05.914 19:26:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:11:05.914 true 00:11:06.175 19:26:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:11:06.175 19:26:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:06.175 19:26:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:06.435 19:26:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:11:06.435 19:26:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:11:06.695 true 00:11:06.695 19:26:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:11:06.695 19:26:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:06.695 19:26:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:06.956 19:26:33 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:11:06.956 19:26:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:11:07.216 true 00:11:07.216 19:26:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:11:07.216 19:26:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.476 19:26:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:07.735 19:26:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:11:07.736 19:26:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:11:07.736 true 00:11:07.736 19:26:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:11:07.736 19:26:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.996 19:26:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:08.256 19:26:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:11:08.256 19:26:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:11:08.516 true 00:11:08.516 19:26:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:11:08.516 19:26:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:08.777 19:26:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:09.038 19:26:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:11:09.038 19:26:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:11:09.038 true 00:11:09.038 19:26:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:11:09.038 19:26:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:09.299 19:26:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:09.559 19:26:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:11:09.559 19:26:35 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:11:09.819 true 00:11:09.819 19:26:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:11:09.819 19:26:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:10.080 19:26:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:10.343 19:26:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:11:10.343 19:26:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:11:10.343 true 00:11:10.602 19:26:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:11:10.602 19:26:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:10.602 19:26:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:10.862 19:26:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:11:10.862 19:26:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:11:11.121 true 00:11:11.121 19:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:11:11.121 19:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:11.382 19:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:11.643 19:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:11:11.643 19:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:11:11.643 true 00:11:11.643 19:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:11:11.643 19:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:11.903 19:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:12.164 19:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:11:12.164 19:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:11:12.425 true 00:11:12.425 
19:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:11:12.425 19:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:12.686 19:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:12.686 19:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:11:12.686 19:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:11:12.945 true 00:11:12.945 19:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:11:12.945 19:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:13.204 19:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:13.463 19:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:11:13.463 19:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:11:13.463 true 00:11:13.463 19:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:11:13.463 19:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:13.724 19:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:13.984 19:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:11:13.984 19:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:11:14.244 true 00:11:14.244 19:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:11:14.244 19:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:14.504 19:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:14.763 19:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:11:14.763 19:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:11:14.763 true 00:11:14.763 19:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:11:14.763 19:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:15.024 19:26:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:15.284 19:26:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:11:15.284 19:26:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:11:15.544 true 00:11:15.544 19:26:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:11:15.544 19:26:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:15.805 19:26:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:16.065 19:26:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:11:16.065 19:26:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:11:16.065 true 00:11:16.065 19:26:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:11:16.065 19:26:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:16.326 19:26:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:16.586 19:26:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:11:16.586 19:26:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:11:16.854 true 00:11:16.854 19:26:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:11:16.855 19:26:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:16.855 19:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:17.122 19:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:11:17.122 19:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:11:17.382 true 00:11:17.382 19:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:11:17.382 19:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
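
The pattern repeating through this stretch of the trace is lines 44-50 of ns_hotplug_stress.sh: while the I/O load generator (PID 3456948) is still alive, namespace 1 of nqn.2016-06.io.spdk:cnode1 is hot-removed and re-added, and the NULL1 bdev backing the second namespace is grown by one unit per pass (null_size 1008, 1009, ...). A minimal sketch of that loop, reconstructed only from the @44-@50 markers above; the rpc_py and perf_pid variable names, the starting size and the absence of any pacing are assumptions, not the script's verbatim text:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1000                                      # assumed starting value
    while kill -0 "$perf_pid"; do                       # line 44: keep going while the load generator runs
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # line 45: hot-remove NSID 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # line 46: hot-add it back, backed by the Delay0 bdev
        null_size=$((null_size + 1))                    # line 49: next size for the null bdev
        $rpc_py bdev_null_resize NULL1 "$null_size"     # line 50: resize NULL1; rpc.py prints "true" on success
    done
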
00:11:17.642 19:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:17.902 19:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:11:17.902 19:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:11:18.162 true 00:11:18.162 19:26:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:11:18.162 19:26:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:18.162 19:26:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:18.422 19:26:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:11:18.422 19:26:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:11:18.682 true 00:11:18.682 19:26:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:11:18.682 19:26:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:18.943 19:26:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:19.204 19:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:11:19.204 19:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:11:19.204 true 00:11:19.464 19:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:11:19.464 19:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:19.464 19:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:19.724 19:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:11:19.724 19:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:11:19.985 true 00:11:19.985 19:26:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:11:19.985 19:26:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:20.246 19:26:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:20.506 19:26:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:11:20.506 19:26:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:11:20.766 true 00:11:20.766 19:26:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:11:20.766 19:26:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:20.766 19:26:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:21.026 19:26:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:11:21.026 19:26:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:11:21.287 true 00:11:21.287 19:26:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:11:21.287 19:26:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:21.547 19:26:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:21.808 19:26:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:11:21.808 19:26:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:11:22.068 true 00:11:22.068 19:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:11:22.068 19:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:22.329 19:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:22.329 19:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:11:22.329 19:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:11:22.590 true 00:11:22.590 19:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:11:22.590 19:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:22.850 19:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:23.110 19:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 
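
Each rpc.py invocation in this trace is a JSON-RPC 2.0 call against the running SPDK target over its Unix-domain socket (rpc.py defaults to /var/tmp/spdk.sock). As a rough illustration, the resize traced just below corresponds to a request like the following; the exact parameter names and the use of nc for framing are assumptions, and rpc.py itself is the supported client:

    # approximate wire form of: rpc.py bdev_null_resize NULL1 1045
    printf '{"jsonrpc":"2.0","id":1,"method":"bdev_null_resize","params":{"name":"NULL1","new_size":1045}}' |
        nc -U -N /var/tmp/spdk.sock   # -N closes the write side so the server's reply comes back and nc exits
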
00:11:23.110 19:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:11:23.369 true 00:11:23.369 19:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:11:23.369 19:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:23.369 19:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:23.629 19:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:11:23.629 19:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:11:23.890 true 00:11:23.890 19:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:11:23.890 19:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:24.150 19:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:24.410 19:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:11:24.410 19:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:11:24.410 true 00:11:24.670 19:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948 00:11:24.670 19:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:24.670 Initializing NVMe Controllers 00:11:24.670 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:24.670 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1 00:11:24.670 Controller IO queue size 128, less than required. 00:11:24.670 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:24.670 WARNING: Some requested NVMe devices were skipped 00:11:24.670 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:11:24.670 Initialization complete. Launching workers. 
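
The "Initializing NVMe Controllers ... Launching workers." block above and the latency table that follows come from the process the kill -0 checks have been polling (PID 3456948); by its output format it looks like SPDK's NVMe perf example connected to the target at 10.0.0.2:4420 over TCP. "Skipping inactive NS 1" is expected here, since NSID 1 is being hot-removed and re-added underneath it while the run is active. A hedged sketch of how such a run is typically started; the binary path, queue depth, I/O size, workload and runtime are assumptions, only the tool's output appears in this log:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/perf \
        -q 128 -o 4096 -w randread -t 10 \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' &
    perf_pid=$!   # the PID that the kill -0 loop above keeps checking
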
00:11:24.670 ========================================================
00:11:24.670                                                                           Latency(us)
00:11:24.670 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:11:24.670 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   21191.88      10.35    6039.97    2558.01   11180.07
00:11:24.670 ========================================================
00:11:24.670 Total                                                                   :   21191.88      10.35    6039.97    2558.01   11180.07
00:11:24.670
00:11:24.670 19:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:24.930 19:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048
00:11:24.930 19:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048
00:11:25.191 true
00:11:25.191 19:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3456948
00:11:25.191 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3456948) - No such process
00:11:25.191 19:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3456948
00:11:25.191 19:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:25.452 19:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:11:25.712 19:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:11:25.713 19:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:11:25.713 19:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:11:25.713 19:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:11:25.713 19:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:11:25.713 null0
00:11:25.713 19:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:11:25.713 19:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:11:25.713 19:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:11:25.973 null1
00:11:25.973 19:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:11:25.973 19:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:11:25.973 19:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:11:26.233 null2
00:11:26.233 19:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:11:26.233 19:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:11:26.233 19:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:11:26.233 null3 00:11:26.233 19:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:26.233 19:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:26.233 19:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:11:26.493 null4 00:11:26.493 19:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:26.493 19:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:26.493 19:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:11:26.753 null5 00:11:26.753 19:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:26.753 19:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:26.753 19:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:11:27.013 null6 00:11:27.013 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:27.013 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:27.013 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:11:27.275 null7 00:11:27.275 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:27.275 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:27.275 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:11:27.275 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:27.275 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:27.275 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:27.275 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:27.275 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:11:27.275 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:11:27.275 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:27.275 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.275 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
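
With the single-namespace phase finished, lines 58-64 of the script switch to the parallel phase seen here: eight 100 MiB null bdevs with a 4096-byte block size are created, then one background add_remove worker is launched per namespace ID and its PID collected; line 66 later waits on all of them (the "wait 3463579 3463581 ..." entry further down). A compact reconstruction from the trace markers; the loop framing is an assumption, while the RPC calls, sizes and thread count are taken from the log:

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        $rpc_py bdev_null_create "null$i" 100 4096   # lines 59-60: 100 MiB null bdev, 4 KiB blocks
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &             # lines 62-64: NSID 1..8 backed by null0..null7
        pids+=($!)
    done
    wait "${pids[@]}"                                # line 66: wait for all eight workers
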
00:11:27.275 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:27.275 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:27.275 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:27.275 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:11:27.275 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:11:27.275 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
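
The interleaved @14-@18 markers are the eight background workers themselves; each runs the small add_remove helper near the top of the script, ten rounds of hot-adding and hot-removing its own namespace ID. Reconstructed from the trace, with only the bash framing assumed; the local variables, loop bound and RPC calls are exactly what the markers show:

    add_remove() {
        local nsid=$1 bdev=$2                        # line 14
        for ((i = 0; i < 10; i++)); do               # line 16: ten add/remove rounds
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # line 17
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # line 18
        done
    }
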
00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
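
Because all eight workers issue their RPCs concurrently, add_ns and remove_ns entries for different NSIDs interleave freely from here on; that contention on the subsystem's namespace list is exactly what this stress test exercises. If the churn needs to be inspected while it runs, the target's current view can be dumped at any point; nvmf_get_subsystems is the relevant RPC, and the jq field names below are assumptions about its output layout:

    $rpc_py nvmf_get_subsystems |
        jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | .namespaces'
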
00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3463579 3463581 3463584 3463587 3463590 3463593 3463595 3463598 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:27.276 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:27.559 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:27.559 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:27.559 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:27.559 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.559 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:11:27.559 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:27.560 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.560 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:27.560 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:27.560 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.560 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:27.560 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:27.560 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.560 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:27.560 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:27.560 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.560 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:27.560 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:27.560 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.560 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:27.560 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:27.560 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.560 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:27.560 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:27.560 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.560 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:27.820 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:27.820 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:27.820 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:27.820 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:27.820 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:27.820 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:27.820 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:27.820 19:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:28.081 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.081 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.081 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:28.081 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.081 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.081 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:28.081 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.081 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.081 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.081 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:28.081 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.081 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:28.081 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.081 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.081 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.082 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.082 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:28.082 19:26:54 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:28.082 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.082 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.082 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:28.082 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.082 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.082 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:28.343 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:28.343 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:28.343 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:28.343 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:28.343 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:28.343 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:28.343 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:28.343 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:28.343 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.343 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.343 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:28.343 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.343 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.343 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:28.343 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.343 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.343 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:28.343 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.343 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.343 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:28.343 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.343 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.343 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:28.343 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.343 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.343 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:28.343 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.343 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.343 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:28.605 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.605 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.605 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:28.605 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:28.605 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:28.605 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:28.605 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:28.605 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:28.605 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:28.605 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:28.605 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:28.866 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.866 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.866 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.866 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.866 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:28.866 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:28.866 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.866 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.866 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:28.866 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.866 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.866 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:28.866 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.866 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.866 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:28.866 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.866 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.866 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:28.866 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.866 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:11:28.866 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:28.866 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.866 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.866 19:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:29.127 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:29.127 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:29.127 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:29.127 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:29.127 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:29.127 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:29.127 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:29.127 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:29.387 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.387 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.387 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:29.387 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.387 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.387 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:29.387 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.387 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.387 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:29.387 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.387 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.387 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:29.387 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.387 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.387 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.387 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:29.387 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.387 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:29.387 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.387 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.387 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:29.387 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.387 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.387 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:29.387 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:29.387 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:29.387 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:29.648 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:29.648 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:29.648 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:29.648 19:26:55 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:29.648 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:29.648 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.648 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.648 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:29.648 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.648 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.648 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:29.648 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.648 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.648 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:29.648 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.648 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.648 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:29.648 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.648 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.648 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:29.648 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.648 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.648 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:29.648 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.648 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.648 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:29.909 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.909 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.909 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:29.909 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:29.909 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:29.909 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:29.909 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:29.909 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:29.909 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:29.909 19:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:29.909 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:30.169 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.169 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.169 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:30.169 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.169 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.169 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:30.169 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.169 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.169 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:30.169 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.169 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.169 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:30.169 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.169 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.169 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:30.169 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.169 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.169 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:30.169 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.169 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.169 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:30.169 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.169 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.169 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:30.429 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:30.429 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:30.429 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:30.429 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:30.429 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:30.429 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:30.429 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:30.429 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:30.429 19:26:56 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.429 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.429 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:30.429 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.429 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.429 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:30.429 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.429 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.429 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:30.429 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.429 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.429 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:30.690 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.690 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.690 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:30.690 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.690 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.690 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:30.690 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.690 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.690 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:30.690 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.690 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.690 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:30.690 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:30.690 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:30.690 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:30.690 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:30.690 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:30.690 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:30.690 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:30.951 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:30.951 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.951 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.951 19:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:30.951 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.951 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.951 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:30.951 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.951 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.951 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:30.951 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.951 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.951 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:30.951 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.951 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.951 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:30.951 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.951 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.951 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:30.951 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.951 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.951 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:30.951 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.951 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.951 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:31.212 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:31.212 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:31.212 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:31.212 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:31.212 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:31.212 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:31.212 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:31.212 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:31.473 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.473 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.473 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.473 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.473 19:26:57 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.473 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.473 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.473 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.473 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.473 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.473 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.473 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.473 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.473 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.473 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.473 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.473 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:31.473 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:11:31.473 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:31.473 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:11:31.473 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:31.473 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:11:31.473 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:31.473 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:31.473 rmmod nvme_tcp 00:11:31.473 rmmod nvme_fabrics 00:11:31.473 rmmod nvme_keyring 00:11:31.473 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:31.473 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:11:31.473 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:11:31.473 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3456384 ']' 00:11:31.473 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3456384 00:11:31.473 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 3456384 ']' 00:11:31.473 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 3456384 00:11:31.473 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:11:31.473 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:31.473 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3456384 00:11:31.732 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:11:31.732 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:11:31.732 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3456384' 00:11:31.732 killing 
process with pid 3456384 00:11:31.732 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 3456384 00:11:31.732 [2024-05-15 19:26:57.683542] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:31.732 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 3456384 00:11:31.732 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:31.732 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:31.732 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:31.732 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:31.732 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:31.732 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.732 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:31.732 19:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.292 19:26:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:34.292 00:11:34.292 real 0m50.826s 00:11:34.292 user 3m27.871s 00:11:34.292 sys 0m18.215s 00:11:34.292 19:26:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:34.292 19:26:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.292 ************************************ 00:11:34.292 END TEST nvmf_ns_hotplug_stress 00:11:34.292 ************************************ 00:11:34.292 19:26:59 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:34.292 19:26:59 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:34.292 19:26:59 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:34.292 19:26:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:34.292 ************************************ 00:11:34.292 START TEST nvmf_connect_stress 00:11:34.292 ************************************ 00:11:34.292 19:26:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:34.292 * Looking for test storage... 
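For reference, lines 16-18 of target/ns_hotplug_stress.sh are the hot-plug churn itself: line 16 drives a counter with (( ++i )) / (( i < 10 )), line 17 hot-adds a namespace (nsid 1 through 8, each backed by a null bdev null0 through null7) to nqn.2016-06.io.spdk:cnode1 via scripts/rpc.py, and line 18 hot-removes it again. The way several independent counters interleave in the xtrace suggests one add/remove loop per namespace running in the background; the sketch below reconstructs that shape from the trace only (the add_remove helper, its backgrounding and the exact loop form are inferences, not the script's actual source):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # churn one namespace: hot-add it, then hot-remove it, ten times over
    add_remove() {
        local nsid=$1 bdev=$2 i
        for ((i = 0; i < 10; ++i)); do
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    # nsid N is backed by null bdev null(N-1); churn all eight in parallel
    for n in {1..8}; do
        add_remove "$n" "null$((n - 1))" &
    done
    wait

Once the loops drain, the script clears its SIGINT/SIGTERM/EXIT trap and calls nvmftestfini, which unloads nvme-tcp, nvme-fabrics and nvme-keyring, kills the nvmf_tgt process (pid 3456384), flushes cvl_0_1 and removes the SPDK network namespace; the whole nvmf_ns_hotplug_stress run closes out in roughly 51 seconds of wall-clock time before nvmf.sh launches connect_stress.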
00:11:34.292 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:11:34.292 19:27:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:42.434 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:42.434 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:42.434 Found net devices under 0000:31:00.0: cvl_0_0 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:42.434 19:27:08 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:42.434 Found net devices under 0000:31:00.1: cvl_0_1 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:42.434 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:42.434 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.726 ms 00:11:42.434 00:11:42.434 --- 10.0.0.2 ping statistics --- 00:11:42.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.434 rtt min/avg/max/mdev = 0.726/0.726/0.726/0.000 ms 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:42.434 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:42.434 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.379 ms 00:11:42.434 00:11:42.434 --- 10.0.0.1 ping statistics --- 00:11:42.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.434 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=3469681 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 3469681 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 3469681 ']' 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:42.434 19:27:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:42.694 [2024-05-15 19:27:08.637971] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
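For reference, the nvmftestinit trace above rebuilds the TCP test topology that the previous test tore down: the two e810 ports (0000:31:00.0 and 0000:31:00.1, exposed as cvl_0_0 and cvl_0_1) are detected, cvl_0_0 is moved into a fresh cvl_0_0_ns_spdk network namespace to act as the target side while cvl_0_1 stays in the host namespace as the initiator, both ends get a 10.0.0.0/24 address, TCP port 4420 is opened in iptables, reachability is checked with one ping in each direction, nvme-tcp is loaded, and nvmf_tgt (pid 3469681) is started inside the namespace. Condensed into a runnable sketch (only the explicit trailing '&' is an addition; the harness wraps the same nvmf_tgt command in its NVMF_APP array and then waits for /var/tmp/spdk.sock with waitforlisten):

    # cvl_0_0 = target-side e810 port, cvl_0_1 = initiator-side port
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1

    # isolate the target port in its own network namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # initiator keeps 10.0.0.1, target answers on 10.0.0.2 inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # let NVMe/TCP traffic in and confirm both directions are reachable
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # load the initiator driver and start the target inside the namespace
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

The EAL banner and reactor messages that follow are that nvmf_tgt instance coming up on cores 1-3 (-m 0xE); once the RPC socket is listening, the test provisions it with nvmf_create_transport, nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 and nvmf_subsystem_add_listener on 10.0.0.2:4420, as shown below.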
00:11:42.694 [2024-05-15 19:27:08.638033] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:42.694 EAL: No free 2048 kB hugepages reported on node 1 00:11:42.694 [2024-05-15 19:27:08.733649] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:42.694 [2024-05-15 19:27:08.825475] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:42.694 [2024-05-15 19:27:08.825529] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:42.694 [2024-05-15 19:27:08.825542] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:42.694 [2024-05-15 19:27:08.825553] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:42.694 [2024-05-15 19:27:08.825562] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:42.694 [2024-05-15 19:27:08.825710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:42.694 [2024-05-15 19:27:08.825863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:42.694 [2024-05-15 19:27:08.825867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:43.637 [2024-05-15 19:27:09.606865] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:43.637 [2024-05-15 19:27:09.631071] nvmf_rpc.c: 615:decode_rpc_listen_address: 
*WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:43.637 [2024-05-15 19:27:09.631291] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:43.637 NULL1 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3470192 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:43.637 EAL: No free 2048 kB hugepages reported on node 1 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3470192 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.637 19:27:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:43.897 19:27:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.897 19:27:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3470192 00:11:43.897 19:27:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:43.897 19:27:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.897 19:27:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:44.468 19:27:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.468 19:27:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3470192 00:11:44.468 19:27:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:44.468 19:27:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.468 19:27:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:44.728 19:27:10 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.728 19:27:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3470192 00:11:44.728 19:27:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:44.728 19:27:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.728 19:27:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:44.987 19:27:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.987 19:27:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3470192 00:11:44.987 19:27:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:44.987 19:27:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.987 19:27:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:45.247 19:27:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.247 19:27:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3470192 00:11:45.247 19:27:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:45.247 19:27:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.247 19:27:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:45.817 19:27:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.817 19:27:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3470192 00:11:45.817 19:27:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:45.817 19:27:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.817 19:27:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:46.076 19:27:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.076 19:27:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3470192 00:11:46.076 19:27:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:46.076 19:27:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.076 19:27:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:46.337 19:27:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.337 19:27:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3470192 00:11:46.337 19:27:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:46.337 19:27:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.337 19:27:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:46.597 19:27:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.597 19:27:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3470192 00:11:46.597 19:27:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:46.597 19:27:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.597 19:27:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:46.857 19:27:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:11:46.857 19:27:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3470192 00:11:46.857 19:27:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:46.857 19:27:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.857 19:27:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:47.428 19:27:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.428 19:27:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3470192 00:11:47.428 19:27:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:47.428 19:27:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.428 19:27:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:47.689 19:27:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.689 19:27:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3470192 00:11:47.689 19:27:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:47.689 19:27:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.689 19:27:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:47.950 19:27:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.950 19:27:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3470192 00:11:47.950 19:27:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:47.950 19:27:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.950 19:27:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:48.211 19:27:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.211 19:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3470192 00:11:48.211 19:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:48.211 19:27:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.211 19:27:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:48.471 19:27:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.471 19:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3470192 00:11:48.471 19:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:48.471 19:27:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.471 19:27:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:49.041 19:27:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.041 19:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3470192 00:11:49.041 19:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:49.041 19:27:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.041 19:27:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:49.302 19:27:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.302 19:27:15 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3470192 00:11:49.302 19:27:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:49.302 19:27:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.302 19:27:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:49.563 19:27:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.563 19:27:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3470192 00:11:49.563 19:27:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:49.563 19:27:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.563 19:27:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:49.823 19:27:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.823 19:27:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3470192 00:11:49.823 19:27:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:49.823 19:27:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.823 19:27:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:50.084 19:27:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.084 19:27:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3470192 00:11:50.084 19:27:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:50.084 19:27:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.084 19:27:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:50.656 19:27:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.656 19:27:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3470192 00:11:50.656 19:27:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:50.656 19:27:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.656 19:27:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:50.916 19:27:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.916 19:27:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3470192 00:11:50.916 19:27:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:50.916 19:27:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.916 19:27:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:51.177 19:27:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.177 19:27:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3470192 00:11:51.177 19:27:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:51.177 19:27:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.177 19:27:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:51.438 19:27:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.438 19:27:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 
-- # kill -0 3470192 00:11:51.438 19:27:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:51.438 19:27:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.438 19:27:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:52.009 19:27:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.009 19:27:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3470192 00:11:52.009 19:27:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:52.009 19:27:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.009 19:27:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:52.269 19:27:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.270 19:27:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3470192 00:11:52.270 19:27:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:52.270 19:27:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.270 19:27:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:52.531 19:27:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.531 19:27:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3470192 00:11:52.531 19:27:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:52.531 19:27:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.531 19:27:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:52.791 19:27:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.791 19:27:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3470192 00:11:52.791 19:27:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:52.791 19:27:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.791 19:27:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:53.051 19:27:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.051 19:27:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3470192 00:11:53.051 19:27:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:53.051 19:27:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.051 19:27:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:53.623 19:27:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.623 19:27:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3470192 00:11:53.623 19:27:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:53.623 19:27:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.623 19:27:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:53.623 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:53.884 19:27:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.884 19:27:19 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3470192 00:11:53.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3470192) - No such process 00:11:53.884 19:27:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3470192 00:11:53.884 19:27:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:53.884 19:27:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:53.884 19:27:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:53.884 19:27:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:53.884 19:27:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:11:53.884 19:27:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:53.884 19:27:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:11:53.884 19:27:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:53.884 19:27:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:53.884 rmmod nvme_tcp 00:11:53.884 rmmod nvme_fabrics 00:11:53.884 rmmod nvme_keyring 00:11:53.884 19:27:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:53.884 19:27:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:11:53.884 19:27:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:11:53.884 19:27:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 3469681 ']' 00:11:53.884 19:27:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 3469681 00:11:53.884 19:27:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 3469681 ']' 00:11:53.884 19:27:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 3469681 00:11:53.884 19:27:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:11:53.884 19:27:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:53.884 19:27:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3469681 00:11:53.884 19:27:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:11:53.884 19:27:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:11:53.884 19:27:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3469681' 00:11:53.884 killing process with pid 3469681 00:11:53.884 19:27:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 3469681 00:11:53.884 [2024-05-15 19:27:19.983640] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:53.884 19:27:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 3469681 00:11:54.146 19:27:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:54.146 19:27:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:54.146 19:27:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:54.146 19:27:20 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:54.146 19:27:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:54.146 19:27:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.146 19:27:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:54.146 19:27:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.061 19:27:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:56.061 00:11:56.061 real 0m22.220s 00:11:56.061 user 0m42.840s 00:11:56.061 sys 0m9.697s 00:11:56.061 19:27:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:56.061 19:27:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:56.061 ************************************ 00:11:56.061 END TEST nvmf_connect_stress 00:11:56.061 ************************************ 00:11:56.061 19:27:22 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:56.061 19:27:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:56.061 19:27:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:56.061 19:27:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:56.322 ************************************ 00:11:56.322 START TEST nvmf_fused_ordering 00:11:56.323 ************************************ 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:56.323 * Looking for test storage... 
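Before the fused_ordering pieces are located, note how the connect_stress run above tears itself down: nvmftestfini unloads the initiator-side NVMe modules, stops the nvmf_tgt reactor process, and removes the namespaced network setup. Read from the tail of that trace, the cleanup is roughly:

  modprobe -v -r nvme-tcp              # the rmmod lines show nvme_tcp, nvme_fabrics and nvme_keyring going away
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"   # nvmfpid is the nvmf_tgt started for the test (3469681 in this run)
  ip netns del cvl_0_0_ns_spdk         # assumed body of _remove_spdk_ns; the helper's internals are not shown in the trace
  ip -4 addr flush cvl_0_1             # drop 10.0.0.1/24 from the initiator port

The kill/wait pair is a simplification of the suite's killprocess helper, and the ip netns del line is an assumption about what _remove_spdk_ns does; the other commands appear verbatim in the trace above.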
00:11:56.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:11:56.323 19:27:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:04.521 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:04.521 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:04.521 Found net devices under 0000:31:00.0: cvl_0_0 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:04.521 19:27:30 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:04.521 Found net devices under 0000:31:00.1: cvl_0_1 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:04.521 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:04.783 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:04.783 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:04.783 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:04.783 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:04.783 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.496 ms 00:12:04.783 00:12:04.783 --- 10.0.0.2 ping statistics --- 00:12:04.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.783 rtt min/avg/max/mdev = 0.496/0.496/0.496/0.000 ms 00:12:04.783 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:04.783 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:04.783 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.342 ms 00:12:04.783 00:12:04.783 --- 10.0.0.1 ping statistics --- 00:12:04.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.783 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:12:04.783 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:04.783 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:12:04.783 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:04.783 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:04.783 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:04.783 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:04.783 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:04.783 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:04.783 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:04.783 19:27:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:04.783 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:04.783 19:27:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:04.783 19:27:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:04.783 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=3476935 00:12:04.783 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 3476935 00:12:04.783 19:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:04.783 19:27:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 3476935 ']' 00:12:04.783 19:27:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.783 19:27:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:04.783 19:27:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.783 19:27:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:04.783 19:27:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:04.783 [2024-05-15 19:27:30.911645] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
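The device-discovery pass a few entries up (gather_supported_nvmf_pci_devs) is worth a short note: it walks a table of Intel and Mellanox device IDs, keeps the two E810 functions at 0000:31:00.0 and 0000:31:00.1 (device 0x159b, ice driver), and resolves each PCI function to its kernel interface purely through sysfs, which is how the script arrives at cvl_0_0 and cvl_0_1 without parsing any tool output. The core of that mapping, as traced (the pci assignment below is only there to make the fragment standalone):

  pci=0000:31:00.0                                           # hypothetical; common.sh gets this from its loop over pci_devs
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)           # one entry per netdev owned by this PCI function
  pci_net_devs=("${pci_net_devs[@]##*/}")                    # strip the sysfs path, leaving just the interface name
  echo "Found net devices under $pci: ${pci_net_devs[*]}"    # prints: Found net devices under 0000:31:00.0: cvl_0_0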
00:12:04.783 [2024-05-15 19:27:30.911709] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:04.783 EAL: No free 2048 kB hugepages reported on node 1 00:12:05.044 [2024-05-15 19:27:30.989203] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.045 [2024-05-15 19:27:31.061493] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:05.045 [2024-05-15 19:27:31.061529] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:05.045 [2024-05-15 19:27:31.061537] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:05.045 [2024-05-15 19:27:31.061543] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:05.045 [2024-05-15 19:27:31.061548] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:05.045 [2024-05-15 19:27:31.061566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.616 19:27:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:05.616 19:27:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:12:05.616 19:27:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:05.616 19:27:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:05.616 19:27:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:05.877 19:27:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:05.877 19:27:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:05.877 19:27:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.877 19:27:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:05.877 [2024-05-15 19:27:31.812666] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:05.877 19:27:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.877 19:27:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:05.877 19:27:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.877 19:27:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:05.877 19:27:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.877 19:27:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:05.877 19:27:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.877 19:27:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:05.877 [2024-05-15 19:27:31.836671] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:05.877 [2024-05-15 19:27:31.836853] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:05.877 19:27:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.877 19:27:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:05.877 19:27:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.877 19:27:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:05.877 NULL1 00:12:05.877 19:27:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.877 19:27:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:05.877 19:27:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.877 19:27:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:05.877 19:27:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.877 19:27:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:05.877 19:27:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.877 19:27:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:05.877 19:27:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.877 19:27:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:05.877 [2024-05-15 19:27:31.903006] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
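With the target up and listening on its RPC socket, the fused_ordering case needs very little state: one TCP transport, one subsystem backed by a null bdev, and one listener on the namespaced address. Condensed from the rpc_cmd calls above (rpc_cmd is the suite's thin wrapper around scripts/rpc.py, talking to the nvmf_tgt running inside cvl_0_0_ns_spdk):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192                     # transport options exactly as NVMF_TRANSPORT_OPTS passes them
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # allow any host, up to 10 namespaces
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd bdev_null_create NULL1 1000 512                             # 1000 MB null bdev, 512-byte blocks
  rpc_cmd bdev_wait_for_examine
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1      # reported as "Namespace ID: 1 size: 1GB" below
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The fused_ordering tool then attaches to cnode1 over TCP; the numbered fused_ordering(n) lines that follow are its progress output.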
00:12:05.877 [2024-05-15 19:27:31.903045] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3477281 ] 00:12:05.877 EAL: No free 2048 kB hugepages reported on node 1 00:12:06.448 Attached to nqn.2016-06.io.spdk:cnode1 00:12:06.448 Namespace ID: 1 size: 1GB 00:12:06.448 fused_ordering(0) 00:12:06.448 fused_ordering(1) 00:12:06.448 fused_ordering(2) 00:12:06.448 fused_ordering(3) 00:12:06.448 fused_ordering(4) 00:12:06.448 fused_ordering(5) 00:12:06.448 fused_ordering(6) 00:12:06.448 fused_ordering(7) 00:12:06.448 fused_ordering(8) 00:12:06.448 fused_ordering(9) 00:12:06.448 fused_ordering(10) 00:12:06.448 fused_ordering(11) 00:12:06.448 fused_ordering(12) 00:12:06.448 fused_ordering(13) 00:12:06.448 fused_ordering(14) 00:12:06.448 fused_ordering(15) 00:12:06.448 fused_ordering(16) 00:12:06.448 fused_ordering(17) 00:12:06.448 fused_ordering(18) 00:12:06.448 fused_ordering(19) 00:12:06.448 fused_ordering(20) 00:12:06.448 fused_ordering(21) 00:12:06.448 fused_ordering(22) 00:12:06.448 fused_ordering(23) 00:12:06.448 fused_ordering(24) 00:12:06.448 fused_ordering(25) 00:12:06.448 fused_ordering(26) 00:12:06.448 fused_ordering(27) 00:12:06.448 fused_ordering(28) 00:12:06.448 fused_ordering(29) 00:12:06.448 fused_ordering(30) 00:12:06.448 fused_ordering(31) 00:12:06.448 fused_ordering(32) 00:12:06.448 fused_ordering(33) 00:12:06.448 fused_ordering(34) 00:12:06.448 fused_ordering(35) 00:12:06.448 fused_ordering(36) 00:12:06.448 fused_ordering(37) 00:12:06.448 fused_ordering(38) 00:12:06.448 fused_ordering(39) 00:12:06.448 fused_ordering(40) 00:12:06.448 fused_ordering(41) 00:12:06.448 fused_ordering(42) 00:12:06.448 fused_ordering(43) 00:12:06.448 fused_ordering(44) 00:12:06.448 fused_ordering(45) 00:12:06.448 fused_ordering(46) 00:12:06.448 fused_ordering(47) 00:12:06.448 fused_ordering(48) 00:12:06.448 fused_ordering(49) 00:12:06.448 fused_ordering(50) 00:12:06.448 fused_ordering(51) 00:12:06.448 fused_ordering(52) 00:12:06.448 fused_ordering(53) 00:12:06.448 fused_ordering(54) 00:12:06.448 fused_ordering(55) 00:12:06.448 fused_ordering(56) 00:12:06.448 fused_ordering(57) 00:12:06.448 fused_ordering(58) 00:12:06.448 fused_ordering(59) 00:12:06.448 fused_ordering(60) 00:12:06.448 fused_ordering(61) 00:12:06.448 fused_ordering(62) 00:12:06.448 fused_ordering(63) 00:12:06.448 fused_ordering(64) 00:12:06.448 fused_ordering(65) 00:12:06.448 fused_ordering(66) 00:12:06.448 fused_ordering(67) 00:12:06.448 fused_ordering(68) 00:12:06.448 fused_ordering(69) 00:12:06.448 fused_ordering(70) 00:12:06.448 fused_ordering(71) 00:12:06.448 fused_ordering(72) 00:12:06.448 fused_ordering(73) 00:12:06.448 fused_ordering(74) 00:12:06.448 fused_ordering(75) 00:12:06.448 fused_ordering(76) 00:12:06.448 fused_ordering(77) 00:12:06.448 fused_ordering(78) 00:12:06.449 fused_ordering(79) 00:12:06.449 fused_ordering(80) 00:12:06.449 fused_ordering(81) 00:12:06.449 fused_ordering(82) 00:12:06.449 fused_ordering(83) 00:12:06.449 fused_ordering(84) 00:12:06.449 fused_ordering(85) 00:12:06.449 fused_ordering(86) 00:12:06.449 fused_ordering(87) 00:12:06.449 fused_ordering(88) 00:12:06.449 fused_ordering(89) 00:12:06.449 fused_ordering(90) 00:12:06.449 fused_ordering(91) 00:12:06.449 fused_ordering(92) 00:12:06.449 fused_ordering(93) 00:12:06.449 fused_ordering(94) 00:12:06.449 fused_ordering(95) 00:12:06.449 fused_ordering(96) 00:12:06.449 
fused_ordering(97) 00:12:06.449 [fused_ordering(98) through fused_ordering(955) logged in sequence, one entry per command, with timestamps advancing from 00:12:06.449 to 00:12:08.425; per-entry lines condensed] fused_ordering(956) 00:12:08.425 
fused_ordering(957) 00:12:08.425 fused_ordering(958) 00:12:08.425 fused_ordering(959) 00:12:08.425 fused_ordering(960) 00:12:08.425 fused_ordering(961) 00:12:08.425 fused_ordering(962) 00:12:08.425 fused_ordering(963) 00:12:08.425 fused_ordering(964) 00:12:08.425 fused_ordering(965) 00:12:08.425 fused_ordering(966) 00:12:08.425 fused_ordering(967) 00:12:08.425 fused_ordering(968) 00:12:08.425 fused_ordering(969) 00:12:08.425 fused_ordering(970) 00:12:08.425 fused_ordering(971) 00:12:08.425 fused_ordering(972) 00:12:08.425 fused_ordering(973) 00:12:08.425 fused_ordering(974) 00:12:08.425 fused_ordering(975) 00:12:08.425 fused_ordering(976) 00:12:08.425 fused_ordering(977) 00:12:08.425 fused_ordering(978) 00:12:08.425 fused_ordering(979) 00:12:08.425 fused_ordering(980) 00:12:08.425 fused_ordering(981) 00:12:08.425 fused_ordering(982) 00:12:08.425 fused_ordering(983) 00:12:08.425 fused_ordering(984) 00:12:08.425 fused_ordering(985) 00:12:08.425 fused_ordering(986) 00:12:08.425 fused_ordering(987) 00:12:08.425 fused_ordering(988) 00:12:08.425 fused_ordering(989) 00:12:08.425 fused_ordering(990) 00:12:08.425 fused_ordering(991) 00:12:08.425 fused_ordering(992) 00:12:08.425 fused_ordering(993) 00:12:08.425 fused_ordering(994) 00:12:08.425 fused_ordering(995) 00:12:08.425 fused_ordering(996) 00:12:08.425 fused_ordering(997) 00:12:08.425 fused_ordering(998) 00:12:08.425 fused_ordering(999) 00:12:08.425 fused_ordering(1000) 00:12:08.425 fused_ordering(1001) 00:12:08.425 fused_ordering(1002) 00:12:08.425 fused_ordering(1003) 00:12:08.425 fused_ordering(1004) 00:12:08.425 fused_ordering(1005) 00:12:08.425 fused_ordering(1006) 00:12:08.425 fused_ordering(1007) 00:12:08.425 fused_ordering(1008) 00:12:08.425 fused_ordering(1009) 00:12:08.425 fused_ordering(1010) 00:12:08.425 fused_ordering(1011) 00:12:08.425 fused_ordering(1012) 00:12:08.425 fused_ordering(1013) 00:12:08.425 fused_ordering(1014) 00:12:08.425 fused_ordering(1015) 00:12:08.425 fused_ordering(1016) 00:12:08.425 fused_ordering(1017) 00:12:08.425 fused_ordering(1018) 00:12:08.425 fused_ordering(1019) 00:12:08.425 fused_ordering(1020) 00:12:08.425 fused_ordering(1021) 00:12:08.425 fused_ordering(1022) 00:12:08.425 fused_ordering(1023) 00:12:08.425 19:27:34 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:08.425 19:27:34 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:08.425 19:27:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:08.425 19:27:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:12:08.425 19:27:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:08.425 19:27:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:12:08.425 19:27:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:08.425 19:27:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:08.425 rmmod nvme_tcp 00:12:08.425 rmmod nvme_fabrics 00:12:08.425 rmmod nvme_keyring 00:12:08.425 19:27:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:08.686 19:27:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:12:08.687 19:27:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:12:08.687 19:27:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 3476935 ']' 00:12:08.687 19:27:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 3476935 
00:12:08.687 19:27:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 3476935 ']' 00:12:08.687 19:27:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 3476935 00:12:08.687 19:27:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:12:08.687 19:27:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:08.687 19:27:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3476935 00:12:08.687 19:27:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:12:08.687 19:27:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:12:08.687 19:27:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3476935' 00:12:08.687 killing process with pid 3476935 00:12:08.687 19:27:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 3476935 00:12:08.687 [2024-05-15 19:27:34.671304] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:08.687 19:27:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 3476935 00:12:08.687 19:27:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:08.687 19:27:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:08.687 19:27:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:08.687 19:27:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:08.687 19:27:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:08.687 19:27:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.687 19:27:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:08.687 19:27:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:11.233 19:27:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:11.233 00:12:11.233 real 0m14.603s 00:12:11.233 user 0m7.586s 00:12:11.233 sys 0m8.071s 00:12:11.233 19:27:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:11.233 19:27:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:11.233 ************************************ 00:12:11.233 END TEST nvmf_fused_ordering 00:12:11.233 ************************************ 00:12:11.233 19:27:36 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:11.233 19:27:36 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:11.233 19:27:36 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:11.233 19:27:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:11.233 ************************************ 00:12:11.233 START TEST nvmf_delete_subsystem 00:12:11.233 ************************************ 00:12:11.233 19:27:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 
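Between the two tests, the nvmftestfini sequence traced above unloads the kernel NVMe/TCP initiator modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines are modprobe -v -r output), kills the nvmf_tgt process (pid 3476935 in this run), and flushes the test addresses before the next script starts. A minimal sketch of the equivalent manual cleanup, assuming root and the pid and interface name from this run:

  # Sketch only; pid and interface name are specific to this CI run.
  NVMF_PID=3476935
  modprobe -v -r nvme-tcp       # also drops now-unused deps (nvme_fabrics, nvme_keyring)
  modprobe -v -r nvme-fabrics
  kill "$NVMF_PID"              # stop the nvmf_tgt target process
  ip -4 addr flush cvl_0_1      # clear the test address from the initiator-side port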
00:12:11.233 * Looking for test storage... 00:12:11.233 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:11.233 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:11.233 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:12:11.233 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:11.233 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:11.234 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:11.234 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:11.234 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:11.234 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:11.234 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:11.234 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:11.234 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:11.234 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:11.234 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:11.234 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:11.234 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:11.234 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:11.234 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:11.234 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:11.234 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:11.234 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:11.234 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:11.234 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:11.234 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.234 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.234 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.234 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:12:11.234 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.234 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:12:11.234 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:11.234 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:11.234 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:11.234 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:11.234 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:11.234 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:11.234 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:11.234 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:11.234 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:12:11.234 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:11.234 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:11.234 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:11.234 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:11.234 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:11.234 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.234 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:11.234 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:11.234 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:11.234 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:11.234 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:12:11.234 19:27:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:19.370 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:19.370 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:12:19.370 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:19.370 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:19.370 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:19.370 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:19.370 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:19.370 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:12:19.370 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:19.370 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:12:19.370 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:12:19.370 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:12:19.370 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:19.371 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:19.371 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:19.371 Found net devices under 0000:31:00.0: cvl_0_0 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:19.371 Found net devices under 0000:31:00.1: cvl_0_1 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:19.371 19:27:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:19.371 19:27:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:19.371 19:27:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:19.371 19:27:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:19.371 19:27:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:19.371 19:27:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:19.371 19:27:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:19.371 19:27:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:19.371 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:19.371 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:12:19.371 00:12:19.371 --- 10.0.0.2 ping statistics --- 00:12:19.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.371 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:12:19.371 19:27:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:19.371 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:19.371 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:12:19.371 00:12:19.371 --- 10.0.0.1 ping statistics --- 00:12:19.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.371 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:12:19.371 19:27:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:19.371 19:27:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:12:19.371 19:27:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:19.371 19:27:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:19.371 19:27:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:19.371 19:27:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:19.371 19:27:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:19.371 19:27:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:19.371 19:27:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:19.371 19:27:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:12:19.371 19:27:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:19.371 19:27:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:19.371 19:27:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:19.371 19:27:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=3482439 00:12:19.371 19:27:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 3482439 00:12:19.371 19:27:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:12:19.371 19:27:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 3482439 ']' 00:12:19.371 19:27:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.371 19:27:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:19.371 19:27:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
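The nvmf_tcp_init steps traced above move the first e810 port into a private network namespace, address both ports on 10.0.0.0/24, open TCP port 4420, and verify connectivity in both directions before the target is started. A minimal sketch of the same steps, assuming root and two directly connected ports with the netdev names used on this host (cvl_0_0 for the target side, cvl_0_1 for the initiator side; substitute your own):

  # Sketch only; reproduces the ip/iptables/ping trace above.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                  # default namespace -> namespaced target port
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> initiator-side port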
00:12:19.371 19:27:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:19.371 19:27:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:19.371 [2024-05-15 19:27:45.320727] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:12:19.371 [2024-05-15 19:27:45.320782] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:19.371 EAL: No free 2048 kB hugepages reported on node 1 00:12:19.371 [2024-05-15 19:27:45.418766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:19.371 [2024-05-15 19:27:45.515667] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:19.371 [2024-05-15 19:27:45.515733] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:19.372 [2024-05-15 19:27:45.515742] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:19.372 [2024-05-15 19:27:45.515749] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:19.372 [2024-05-15 19:27:45.515755] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:19.372 [2024-05-15 19:27:45.515891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:19.372 [2024-05-15 19:27:45.515897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.313 19:27:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:20.313 19:27:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:12:20.313 19:27:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:20.313 19:27:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:20.313 19:27:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:20.313 19:27:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:20.313 19:27:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:20.313 19:27:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.313 19:27:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:20.313 [2024-05-15 19:27:46.237220] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:20.313 19:27:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.313 19:27:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:20.313 19:27:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.313 19:27:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:20.313 19:27:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.313 19:27:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:20.313 19:27:46 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.313 19:27:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:20.313 [2024-05-15 19:27:46.261225] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:20.313 [2024-05-15 19:27:46.261446] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:20.313 19:27:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.313 19:27:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:20.313 19:27:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.313 19:27:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:20.313 NULL1 00:12:20.313 19:27:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.313 19:27:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:20.313 19:27:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.313 19:27:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:20.313 Delay0 00:12:20.313 19:27:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.313 19:27:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:20.313 19:27:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.313 19:27:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:20.313 19:27:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.313 19:27:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3482658 00:12:20.313 19:27:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:12:20.313 19:27:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:20.313 EAL: No free 2048 kB hugepages reported on node 1 00:12:20.313 [2024-05-15 19:27:46.358090] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
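The setup traced above reduces to a short sequence of SPDK RPC calls plus one perf invocation. The sketch below is a condensed recap, not the test script itself: it assumes an nvmf_tgt is already running with its default /var/tmp/spdk.sock RPC socket reachable, that it is invoked from the SPDK repository root, and it calls scripts/rpc.py directly where the test goes through its rpc_cmd wrapper. Every flag value is copied verbatim from the trace above.

#!/usr/bin/env bash
# Condensed sketch of the target setup delete_subsystem.sh performs (all values taken from the trace).
set -euo pipefail
rpc=./scripts/rpc.py                        # assumption: run from the SPDK repo root

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512        # null backing bdev: 1000 MB, 512-byte blocks
$rpc bdev_delay_create -b NULL1 -d Delay0 \
     -r 1000000 -t 1000000 -w 1000000 -n 1000000   # large artificial latencies so I/O stays outstanding
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Initiator side: the same background perf run the test launches before deleting the subsystem.
./build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

The artificial delay on Delay0 (presumably ~1 s per I/O, if the values are microseconds) is what keeps the queues full of outstanding commands when nvmf_delete_subsystem fires two seconds into the run, which is exactly the abort path the "completed with error (sct=0, sc=8)" completions below exercise.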
00:12:22.224 19:27:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:22.224 19:27:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.224 19:27:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:22.485 Write completed with error (sct=0, sc=8) 00:12:22.485 Write completed with error (sct=0, sc=8) 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 starting I/O failed: -6 00:12:22.485 Write completed with error (sct=0, sc=8) 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 starting I/O failed: -6 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 Write completed with error (sct=0, sc=8) 00:12:22.485 starting I/O failed: -6 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 Write completed with error (sct=0, sc=8) 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 starting I/O failed: -6 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 Write completed with error (sct=0, sc=8) 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 starting I/O failed: -6 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 Write completed with error (sct=0, sc=8) 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 starting I/O failed: -6 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 Write completed with error (sct=0, sc=8) 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 starting I/O failed: -6 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 starting I/O failed: -6 00:12:22.485 Write completed with error (sct=0, sc=8) 00:12:22.485 Write completed with error (sct=0, sc=8) 00:12:22.485 Write completed with error (sct=0, sc=8) 00:12:22.485 Write completed with error (sct=0, sc=8) 00:12:22.485 starting I/O failed: -6 00:12:22.485 Write completed with error (sct=0, sc=8) 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 Write completed with error (sct=0, sc=8) 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 starting I/O failed: -6 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 Write completed with error (sct=0, sc=8) 00:12:22.485 Write completed with error (sct=0, sc=8) 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 starting I/O failed: -6 00:12:22.485 [2024-05-15 19:27:48.571004] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240c90 is same with the state(5) to be set 00:12:22.485 Write completed with error (sct=0, sc=8) 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 starting I/O failed: -6 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 Write completed with error (sct=0, sc=8) 00:12:22.485 starting I/O 
failed: -6 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 starting I/O failed: -6 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 starting I/O failed: -6 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 Write completed with error (sct=0, sc=8) 00:12:22.485 starting I/O failed: -6 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 starting I/O failed: -6 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 starting I/O failed: -6 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 starting I/O failed: -6 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 Write completed with error (sct=0, sc=8) 00:12:22.485 starting I/O failed: -6 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 Write completed with error (sct=0, sc=8) 00:12:22.485 starting I/O failed: -6 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 starting I/O failed: -6 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 starting I/O failed: -6 00:12:22.485 Write completed with error (sct=0, sc=8) 00:12:22.485 Write completed with error (sct=0, sc=8) 00:12:22.485 starting I/O failed: -6 00:12:22.485 Write completed with error (sct=0, sc=8) 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 starting I/O failed: -6 00:12:22.485 Write completed with error (sct=0, sc=8) 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 starting I/O failed: -6 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 starting I/O failed: -6 00:12:22.485 Write completed with error (sct=0, sc=8) 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 starting I/O failed: -6 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 Write completed with error (sct=0, sc=8) 00:12:22.485 starting I/O failed: -6 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 Write completed with error (sct=0, sc=8) 00:12:22.485 starting I/O failed: -6 00:12:22.485 Write completed with error (sct=0, sc=8) 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 starting I/O failed: -6 00:12:22.485 Write completed with error (sct=0, sc=8) 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 starting I/O failed: -6 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 Write completed with error (sct=0, sc=8) 00:12:22.485 starting I/O failed: -6 00:12:22.485 Write completed with error (sct=0, sc=8) 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 starting I/O failed: -6 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 starting I/O failed: -6 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 Write completed with error (sct=0, sc=8) 00:12:22.485 starting I/O failed: -6 00:12:22.485 Write completed with error (sct=0, sc=8) 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 starting I/O failed: -6 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 Read completed with error 
(sct=0, sc=8) 00:12:22.485 starting I/O failed: -6 00:12:22.485 [2024-05-15 19:27:48.571358] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2255c90 is same with the state(5) to be set 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 starting I/O failed: -6 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 Write completed with error (sct=0, sc=8) 00:12:22.485 Write completed with error (sct=0, sc=8) 00:12:22.485 starting I/O failed: -6 00:12:22.485 Write completed with error (sct=0, sc=8) 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 Read completed with error (sct=0, sc=8) 00:12:22.485 starting I/O failed: -6 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 starting I/O failed: -6 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 starting I/O failed: -6 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 Write completed with error (sct=0, sc=8) 00:12:22.486 starting I/O failed: -6 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 Write completed with error (sct=0, sc=8) 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 Write completed with error (sct=0, sc=8) 00:12:22.486 starting I/O failed: -6 00:12:22.486 Write completed with error (sct=0, sc=8) 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 Write completed with error (sct=0, sc=8) 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 starting I/O failed: -6 00:12:22.486 Write completed with error (sct=0, sc=8) 00:12:22.486 Write completed with error (sct=0, sc=8) 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 starting I/O failed: -6 00:12:22.486 Write completed with error (sct=0, sc=8) 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 Write completed with error (sct=0, sc=8) 00:12:22.486 Write completed with error (sct=0, sc=8) 00:12:22.486 starting I/O failed: -6 00:12:22.486 Write completed with error (sct=0, sc=8) 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 [2024-05-15 19:27:48.575472] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fdccc000c00 is same with the state(5) to be set 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 Write completed with error (sct=0, sc=8) 00:12:22.486 Write completed with error (sct=0, sc=8) 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 Write completed with error (sct=0, sc=8) 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 Write completed with error (sct=0, sc=8) 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 
Read completed with error (sct=0, sc=8) 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 Write completed with error (sct=0, sc=8) 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 Write completed with error (sct=0, sc=8) 00:12:22.486 Write completed with error (sct=0, sc=8) 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 Write completed with error (sct=0, sc=8) 00:12:22.486 Write completed with error (sct=0, sc=8) 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 Write completed with error (sct=0, sc=8) 00:12:22.486 Write completed with error (sct=0, sc=8) 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 Write completed with error (sct=0, sc=8) 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 Write completed with error (sct=0, sc=8) 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 Write completed with error (sct=0, sc=8) 00:12:22.486 Write completed with error (sct=0, sc=8) 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 Write completed with error (sct=0, sc=8) 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:22.486 Write completed with error (sct=0, sc=8) 00:12:22.486 Read completed with error (sct=0, sc=8) 00:12:23.427 [2024-05-15 19:27:49.541302] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225e250 is same with the state(5) to be set 00:12:23.427 Write completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Write completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Write completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Write completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error 
(sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Write completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Write completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 [2024-05-15 19:27:49.575188] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225f290 is same with the state(5) to be set 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Write completed with error (sct=0, sc=8) 00:12:23.427 Write completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Write completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Write completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Write completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Write completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Write completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Write completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Write completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 [2024-05-15 19:27:49.576126] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f8b0 is same with the state(5) to be set 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Write completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Write completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Write completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Write completed with error (sct=0, sc=8) 
00:12:23.427 Write completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 [2024-05-15 19:27:49.577843] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fdccc00c780 is same with the state(5) to be set 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Write completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Read completed with error (sct=0, sc=8) 00:12:23.427 Write completed with error (sct=0, sc=8) 00:12:23.428 Write completed with error (sct=0, sc=8) 00:12:23.428 Write completed with error (sct=0, sc=8) 00:12:23.428 Write completed with error (sct=0, sc=8) 00:12:23.428 Read completed with error (sct=0, sc=8) 00:12:23.428 Write completed with error (sct=0, sc=8) 00:12:23.428 Read completed with error (sct=0, sc=8) 00:12:23.428 [2024-05-15 19:27:49.578022] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fdccc00bfe0 is same with the state(5) to be set 00:12:23.428 Initializing NVMe Controllers 00:12:23.428 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:23.428 Controller IO queue size 128, less than required. 00:12:23.428 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:23.428 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:23.428 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:23.428 Initialization complete. Launching workers. 
00:12:23.428 ======================================================== 00:12:23.428 Latency(us) 00:12:23.428 Device Information : IOPS MiB/s Average min max 00:12:23.428 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 179.88 0.09 908724.03 364.88 1006506.10 00:12:23.428 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.96 0.08 922250.21 274.65 1009308.66 00:12:23.428 ======================================================== 00:12:23.428 Total : 337.84 0.16 915048.22 274.65 1009308.66 00:12:23.428 00:12:23.428 [2024-05-15 19:27:49.578596] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x225e250 (9): Bad file descriptor 00:12:23.428 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:12:23.428 19:27:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.428 19:27:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:12:23.428 19:27:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3482658 00:12:23.428 19:27:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:12:23.999 19:27:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:12:23.999 19:27:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3482658 00:12:23.999 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3482658) - No such process 00:12:23.999 19:27:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3482658 00:12:23.999 19:27:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:12:23.999 19:27:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 3482658 00:12:23.999 19:27:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:12:23.999 19:27:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:23.999 19:27:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:12:23.999 19:27:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:23.999 19:27:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 3482658 00:12:23.999 19:27:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:12:23.999 19:27:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:23.999 19:27:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:23.999 19:27:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:23.999 19:27:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:23.999 19:27:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.999 19:27:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:23.999 19:27:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.999 19:27:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:12:23.999 19:27:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.999 19:27:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:23.999 [2024-05-15 19:27:50.108815] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:23.999 19:27:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.999 19:27:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:23.999 19:27:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.999 19:27:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:23.999 19:27:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.999 19:27:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3483341 00:12:23.999 19:27:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:12:23.999 19:27:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3483341 00:12:23.999 19:27:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:23.999 19:27:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:23.999 EAL: No free 2048 kB hugepages reported on node 1 00:12:23.999 [2024-05-15 19:27:50.180458] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
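The sleep 0.5 iterations that follow are a bounded poll on that background perf process: the script counts delay up, probes the PID with kill -0, and stops either when the probe fails (process gone) or after roughly twenty half-second ticks. A minimal standalone version of the same pattern, assuming perf_pid holds the PID captured with $! when spdk_nvme_perf was backgrounded:

# Bounded wait on a background perf process, mirroring the delay / kill -0 / sleep 0.5 loop in the trace.
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do   # kill -0 only probes for existence, it sends no signal
    if (( delay++ > 20 )); then             # give up after ~10 s of polling
        echo "spdk_nvme_perf (pid $perf_pid) did not finish in time" >&2
        break
    fi
    sleep 0.5
done
wait "$perf_pid" 2>/dev/null || true        # reap the exit status; the process may already be gone

In the first pass above the loop ended almost immediately because deleting the subsystem mid-run made perf bail out with errors; here it simply rides out the 3-second run, and the summary that follows reports average latencies just over 1,000,000 us, consistent with the delay configured on Delay0.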
00:12:24.570 19:27:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:24.570 19:27:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3483341 00:12:24.570 19:27:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:25.140 19:27:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:25.140 19:27:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3483341 00:12:25.140 19:27:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:25.711 19:27:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:25.711 19:27:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3483341 00:12:25.711 19:27:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:25.971 19:27:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:25.971 19:27:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3483341 00:12:25.971 19:27:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:26.543 19:27:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:26.543 19:27:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3483341 00:12:26.543 19:27:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:27.117 19:27:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:27.117 19:27:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3483341 00:12:27.117 19:27:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:27.377 Initializing NVMe Controllers 00:12:27.377 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:27.377 Controller IO queue size 128, less than required. 00:12:27.377 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:27.377 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:27.377 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:27.377 Initialization complete. Launching workers. 
00:12:27.377 ======================================================== 00:12:27.377 Latency(us) 00:12:27.377 Device Information : IOPS MiB/s Average min max 00:12:27.377 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002109.90 1000242.64 1006038.40 00:12:27.377 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004089.01 1000274.61 1041621.02 00:12:27.377 ======================================================== 00:12:27.377 Total : 256.00 0.12 1003099.45 1000242.64 1041621.02 00:12:27.377 00:12:27.638 19:27:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:27.638 19:27:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3483341 00:12:27.638 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3483341) - No such process 00:12:27.638 19:27:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3483341 00:12:27.638 19:27:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:12:27.638 19:27:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:12:27.638 19:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:27.638 19:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:12:27.638 19:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:27.638 19:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:12:27.638 19:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:27.638 19:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:27.638 rmmod nvme_tcp 00:12:27.638 rmmod nvme_fabrics 00:12:27.638 rmmod nvme_keyring 00:12:27.638 19:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:27.638 19:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:12:27.638 19:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:12:27.638 19:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 3482439 ']' 00:12:27.638 19:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 3482439 00:12:27.638 19:27:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 3482439 ']' 00:12:27.638 19:27:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 3482439 00:12:27.638 19:27:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname 00:12:27.638 19:27:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:27.638 19:27:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3482439 00:12:27.638 19:27:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:27.638 19:27:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:27.638 19:27:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3482439' 00:12:27.638 killing process with pid 3482439 00:12:27.638 19:27:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 3482439 00:12:27.638 [2024-05-15 19:27:53.788698] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:27.638 19:27:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 3482439 00:12:27.900 19:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:27.900 19:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:27.900 19:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:27.900 19:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:27.900 19:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:27.900 19:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.900 19:27:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:27.900 19:27:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.810 19:27:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:30.071 00:12:30.071 real 0m19.033s 00:12:30.071 user 0m31.497s 00:12:30.071 sys 0m6.973s 00:12:30.071 19:27:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:30.071 19:27:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:30.071 ************************************ 00:12:30.071 END TEST nvmf_delete_subsystem 00:12:30.071 ************************************ 00:12:30.071 19:27:56 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:30.071 19:27:56 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:30.071 19:27:56 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:30.071 19:27:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:30.071 ************************************ 00:12:30.071 START TEST nvmf_ns_masking 00:12:30.071 ************************************ 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:30.071 * Looking for test storage... 
00:12:30.071 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=db10f50a-e3a5-4832-99a5-2f749b49ef67 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:30.071 19:27:56 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:30.071 19:27:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:30.072 19:27:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:30.072 19:27:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.072 19:27:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:30.072 19:27:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.072 19:27:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:30.072 19:27:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:30.072 19:27:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:12:30.072 19:27:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:38.211 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:38.211 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:38.211 Found net devices under 0000:31:00.0: cvl_0_0 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:38.211 Found net devices under 0000:31:00.1: cvl_0_1 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:38.211 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:38.471 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:38.471 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:38.471 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:38.471 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:38.471 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:38.471 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:12:38.471 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:38.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:38.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:12:38.471 00:12:38.471 --- 10.0.0.2 ping statistics --- 00:12:38.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.471 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:12:38.471 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:38.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:38.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.365 ms 00:12:38.471 00:12:38.471 --- 10.0.0.1 ping statistics --- 00:12:38.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.471 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:12:38.471 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:38.471 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:12:38.471 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:38.471 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:38.471 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:38.471 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:38.471 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:38.471 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:38.471 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:38.732 19:28:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:12:38.732 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:38.732 19:28:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:38.732 19:28:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:38.732 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=3488887 00:12:38.732 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 3488887 00:12:38.732 19:28:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 3488887 ']' 00:12:38.732 19:28:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.732 19:28:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:38.732 19:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:38.732 19:28:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.732 19:28:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:38.732 19:28:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:38.732 [2024-05-15 19:28:04.747841] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
00:12:38.732 [2024-05-15 19:28:04.747913] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:38.732 EAL: No free 2048 kB hugepages reported on node 1 00:12:38.732 [2024-05-15 19:28:04.842435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:38.993 [2024-05-15 19:28:04.941625] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:38.993 [2024-05-15 19:28:04.941684] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:38.993 [2024-05-15 19:28:04.941692] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:38.993 [2024-05-15 19:28:04.941699] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:38.993 [2024-05-15 19:28:04.941705] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:38.993 [2024-05-15 19:28:04.941835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.993 [2024-05-15 19:28:04.941970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:38.993 [2024-05-15 19:28:04.942136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.993 [2024-05-15 19:28:04.942137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:39.615 19:28:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:39.615 19:28:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:12:39.615 19:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:39.615 19:28:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:39.615 19:28:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:39.615 19:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:39.615 19:28:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:39.875 [2024-05-15 19:28:05.851653] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:39.875 19:28:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:12:39.875 19:28:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:12:39.875 19:28:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:40.136 Malloc1 00:12:40.136 19:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:40.136 Malloc2 00:12:40.396 19:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:40.396 19:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:40.656 19:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.916 [2024-05-15 19:28:06.944496] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:40.916 [2024-05-15 19:28:06.944746] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.916 19:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:12:40.916 19:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I db10f50a-e3a5-4832-99a5-2f749b49ef67 -a 10.0.0.2 -s 4420 -i 4 00:12:40.916 19:28:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:12:40.916 19:28:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:12:40.916 19:28:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:41.175 19:28:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:41.175 19:28:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:12:43.084 19:28:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:43.084 19:28:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:43.084 19:28:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:43.084 19:28:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:43.084 19:28:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:43.084 19:28:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:12:43.084 19:28:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:12:43.084 19:28:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:43.084 19:28:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:12:43.084 19:28:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:12:43.084 19:28:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:12:43.084 19:28:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:43.084 19:28:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:43.084 [ 0]:0x1 00:12:43.084 19:28:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:43.084 19:28:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:43.345 19:28:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=d1852b93a88648d78049c0d4599521e4 00:12:43.345 19:28:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ d1852b93a88648d78049c0d4599521e4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:43.345 19:28:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:43.345 19:28:09 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:12:43.345 19:28:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:43.345 19:28:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:43.345 [ 0]:0x1 00:12:43.345 19:28:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:43.345 19:28:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:43.605 19:28:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=d1852b93a88648d78049c0d4599521e4 00:12:43.605 19:28:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ d1852b93a88648d78049c0d4599521e4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:43.605 19:28:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:12:43.605 19:28:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:43.605 19:28:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:43.605 [ 1]:0x2 00:12:43.605 19:28:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:43.605 19:28:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:43.605 19:28:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=826f3076a0fb4ea8bbb84b2b8a7dcbf7 00:12:43.605 19:28:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 826f3076a0fb4ea8bbb84b2b8a7dcbf7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:43.605 19:28:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:12:43.605 19:28:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:43.605 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.605 19:28:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.865 19:28:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:44.125 19:28:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:12:44.125 19:28:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I db10f50a-e3a5-4832-99a5-2f749b49ef67 -a 10.0.0.2 -s 4420 -i 4 00:12:44.125 19:28:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:44.125 19:28:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:12:44.125 19:28:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:44.125 19:28:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:12:44.125 19:28:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:12:44.125 19:28:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:12:46.669 19:28:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:46.669 19:28:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:46.669 19:28:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # 
grep -c SPDKISFASTANDAWESOME 00:12:46.669 19:28:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:46.669 19:28:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:46.669 19:28:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:12:46.669 19:28:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:12:46.669 19:28:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:46.669 19:28:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:12:46.669 19:28:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:12:46.669 19:28:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:12:46.669 19:28:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:46.669 19:28:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:46.669 19:28:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:46.669 19:28:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:46.669 19:28:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:46.669 19:28:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:46.669 19:28:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:46.669 19:28:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:46.669 19:28:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:46.669 19:28:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:46.669 19:28:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:46.669 19:28:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:46.669 19:28:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:46.669 19:28:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:46.669 19:28:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:46.669 19:28:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:46.669 19:28:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:46.669 19:28:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:12:46.669 19:28:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:46.669 19:28:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:46.669 [ 0]:0x2 00:12:46.669 19:28:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:46.669 19:28:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:46.669 19:28:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=826f3076a0fb4ea8bbb84b2b8a7dcbf7 00:12:46.669 19:28:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 826f3076a0fb4ea8bbb84b2b8a7dcbf7 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:46.669 19:28:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:46.669 19:28:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:12:46.669 19:28:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:46.669 19:28:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:46.669 [ 0]:0x1 00:12:46.669 19:28:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:46.669 19:28:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:46.669 19:28:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=d1852b93a88648d78049c0d4599521e4 00:12:46.669 19:28:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ d1852b93a88648d78049c0d4599521e4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:46.669 19:28:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:12:46.669 19:28:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:46.669 19:28:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:46.929 [ 1]:0x2 00:12:46.929 19:28:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:46.929 19:28:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:46.929 19:28:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=826f3076a0fb4ea8bbb84b2b8a7dcbf7 00:12:46.929 19:28:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 826f3076a0fb4ea8bbb84b2b8a7dcbf7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:46.929 19:28:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:47.190 19:28:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:12:47.190 19:28:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:47.190 19:28:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:47.190 19:28:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:47.190 19:28:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:47.190 19:28:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:47.190 19:28:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:47.190 19:28:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:47.190 19:28:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:47.190 19:28:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:47.190 19:28:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:47.190 19:28:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:47.190 19:28:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:47.190 19:28:13 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:47.190 19:28:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:47.190 19:28:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:47.190 19:28:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:47.190 19:28:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:47.190 19:28:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:12:47.190 19:28:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:47.190 19:28:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:47.190 [ 0]:0x2 00:12:47.190 19:28:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:47.190 19:28:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:47.190 19:28:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=826f3076a0fb4ea8bbb84b2b8a7dcbf7 00:12:47.190 19:28:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 826f3076a0fb4ea8bbb84b2b8a7dcbf7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:47.190 19:28:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:12:47.190 19:28:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:47.190 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.190 19:28:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:47.450 19:28:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:12:47.450 19:28:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I db10f50a-e3a5-4832-99a5-2f749b49ef67 -a 10.0.0.2 -s 4420 -i 4 00:12:47.710 19:28:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:47.710 19:28:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:12:47.710 19:28:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:47.710 19:28:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:12:47.710 19:28:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:12:47.710 19:28:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:12:49.621 19:28:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:49.621 19:28:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:49.621 19:28:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:49.621 19:28:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:12:49.621 19:28:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:49.621 19:28:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:12:49.621 19:28:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 
-- # nvme list-subsys -o json 00:12:49.621 19:28:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:49.881 19:28:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:12:49.881 19:28:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:12:49.881 19:28:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:12:49.881 19:28:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:49.881 19:28:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:49.881 [ 0]:0x1 00:12:49.881 19:28:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:49.881 19:28:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:49.881 19:28:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=d1852b93a88648d78049c0d4599521e4 00:12:49.881 19:28:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ d1852b93a88648d78049c0d4599521e4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:49.881 19:28:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:12:49.881 19:28:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:49.881 19:28:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:49.881 [ 1]:0x2 00:12:49.881 19:28:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:49.881 19:28:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:49.881 19:28:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=826f3076a0fb4ea8bbb84b2b8a7dcbf7 00:12:49.881 19:28:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 826f3076a0fb4ea8bbb84b2b8a7dcbf7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:49.881 19:28:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:50.142 19:28:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:12:50.142 19:28:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:50.142 19:28:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:50.142 19:28:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:50.142 19:28:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:50.142 19:28:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:50.142 19:28:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:50.142 19:28:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:50.142 19:28:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:50.142 19:28:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:50.142 19:28:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:50.142 19:28:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:50.402 19:28:16 nvmf_tcp.nvmf_ns_masking 
-- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:50.402 19:28:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:50.402 19:28:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:50.402 19:28:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:50.402 19:28:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:50.402 19:28:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:50.402 19:28:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:12:50.402 19:28:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:50.402 19:28:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:50.402 [ 0]:0x2 00:12:50.402 19:28:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:50.402 19:28:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:50.402 19:28:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=826f3076a0fb4ea8bbb84b2b8a7dcbf7 00:12:50.402 19:28:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 826f3076a0fb4ea8bbb84b2b8a7dcbf7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:50.402 19:28:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:50.402 19:28:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:50.403 19:28:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:50.403 19:28:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:50.403 19:28:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:50.403 19:28:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:50.403 19:28:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:50.403 19:28:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:50.403 19:28:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:50.403 19:28:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:50.403 19:28:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:50.403 19:28:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:50.663 [2024-05-15 19:28:16.591798] nvmf_rpc.c:1781:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:50.663 
request: 00:12:50.663 { 00:12:50.663 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:50.663 "nsid": 2, 00:12:50.663 "host": "nqn.2016-06.io.spdk:host1", 00:12:50.663 "method": "nvmf_ns_remove_host", 00:12:50.663 "req_id": 1 00:12:50.663 } 00:12:50.663 Got JSON-RPC error response 00:12:50.663 response: 00:12:50.663 { 00:12:50.663 "code": -32602, 00:12:50.663 "message": "Invalid parameters" 00:12:50.663 } 00:12:50.663 19:28:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:50.663 19:28:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:50.663 19:28:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:50.663 19:28:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:50.663 19:28:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:12:50.663 19:28:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:50.663 19:28:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:50.663 19:28:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:50.663 19:28:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:50.663 19:28:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:50.663 19:28:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:50.663 19:28:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:50.663 19:28:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:50.663 19:28:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:50.663 19:28:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:50.663 19:28:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:50.663 19:28:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:50.663 19:28:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:50.663 19:28:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:50.663 19:28:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:50.663 19:28:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:50.663 19:28:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:50.663 19:28:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:12:50.663 19:28:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:50.663 19:28:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:50.663 [ 0]:0x2 00:12:50.663 19:28:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:50.663 19:28:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:50.663 19:28:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=826f3076a0fb4ea8bbb84b2b8a7dcbf7 00:12:50.663 19:28:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 826f3076a0fb4ea8bbb84b2b8a7dcbf7 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:50.663 19:28:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:12:50.663 19:28:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:50.924 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.924 19:28:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.184 19:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:12:51.184 19:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:12:51.184 19:28:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:51.184 19:28:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:12:51.184 19:28:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:51.184 19:28:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:12:51.184 19:28:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:51.184 19:28:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:51.184 rmmod nvme_tcp 00:12:51.184 rmmod nvme_fabrics 00:12:51.184 rmmod nvme_keyring 00:12:51.184 19:28:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:51.184 19:28:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:12:51.184 19:28:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:12:51.184 19:28:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 3488887 ']' 00:12:51.184 19:28:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 3488887 00:12:51.184 19:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 3488887 ']' 00:12:51.184 19:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 3488887 00:12:51.184 19:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:12:51.184 19:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:51.184 19:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3488887 00:12:51.184 19:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:51.184 19:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:51.184 19:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3488887' 00:12:51.184 killing process with pid 3488887 00:12:51.184 19:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 3488887 00:12:51.184 [2024-05-15 19:28:17.260147] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:51.184 19:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 3488887 00:12:51.445 19:28:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:51.445 19:28:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:51.445 19:28:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:51.445 19:28:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s 
]] 00:12:51.445 19:28:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:51.445 19:28:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:51.445 19:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:51.445 19:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:53.355 19:28:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:53.355 00:12:53.355 real 0m23.414s 00:12:53.355 user 0m55.482s 00:12:53.355 sys 0m8.000s 00:12:53.355 19:28:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:53.355 19:28:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:53.355 ************************************ 00:12:53.355 END TEST nvmf_ns_masking 00:12:53.355 ************************************ 00:12:53.355 19:28:19 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:12:53.355 19:28:19 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:53.355 19:28:19 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:53.355 19:28:19 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:53.355 19:28:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:53.616 ************************************ 00:12:53.616 START TEST nvmf_nvme_cli 00:12:53.616 ************************************ 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:53.616 * Looking for test storage... 
00:12:53.616 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:12:53.616 19:28:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:01.916 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:01.916 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:01.916 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:01.917 Found net devices under 0000:31:00.0: cvl_0_0 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:01.917 Found net devices under 0000:31:00.1: cvl_0_1 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:01.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:01.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.722 ms 00:13:01.917 00:13:01.917 --- 10.0.0.2 ping statistics --- 00:13:01.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:01.917 rtt min/avg/max/mdev = 0.722/0.722/0.722/0.000 ms 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:01.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:01.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:13:01.917 00:13:01.917 --- 10.0.0.1 ping statistics --- 00:13:01.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:01.917 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:01.917 19:28:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:01.917 19:28:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:01.917 19:28:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:01.917 19:28:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:01.917 19:28:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:01.917 19:28:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=3496212 00:13:01.917 19:28:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 3496212 00:13:01.917 19:28:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:01.917 19:28:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # '[' -z 3496212 ']' 00:13:01.917 19:28:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.917 19:28:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:01.917 19:28:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:01.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:01.917 19:28:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:01.917 19:28:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:01.917 [2024-05-15 19:28:28.077778] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:13:01.917 [2024-05-15 19:28:28.077825] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.177 EAL: No free 2048 kB hugepages reported on node 1 00:13:02.177 [2024-05-15 19:28:28.168706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:02.177 [2024-05-15 19:28:28.248123] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:02.177 [2024-05-15 19:28:28.248187] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:02.177 [2024-05-15 19:28:28.248196] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:02.177 [2024-05-15 19:28:28.248203] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:02.177 [2024-05-15 19:28:28.248209] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:02.178 [2024-05-15 19:28:28.248361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.178 [2024-05-15 19:28:28.248433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:02.178 [2024-05-15 19:28:28.248606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.178 [2024-05-15 19:28:28.248606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:02.750 19:28:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:02.750 19:28:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # return 0 00:13:02.750 19:28:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:02.750 19:28:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:02.750 19:28:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:03.011 19:28:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:03.011 19:28:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:03.011 19:28:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.011 19:28:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:03.011 [2024-05-15 19:28:28.949019] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:03.011 19:28:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.011 19:28:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:03.011 19:28:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.011 19:28:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:03.011 Malloc0 00:13:03.011 19:28:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.011 19:28:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:03.011 19:28:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.011 19:28:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:03.011 Malloc1 00:13:03.011 19:28:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.011 19:28:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:03.011 19:28:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.011 19:28:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:03.011 19:28:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.011 19:28:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:03.011 19:28:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.011 19:28:29 
nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:03.011 19:28:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.011 19:28:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:03.011 19:28:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.011 19:28:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:03.011 19:28:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.011 19:28:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:03.011 19:28:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.011 19:28:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:03.011 [2024-05-15 19:28:29.038756] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:03.011 [2024-05-15 19:28:29.039017] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.011 19:28:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.011 19:28:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:03.011 19:28:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.011 19:28:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:03.011 19:28:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.011 19:28:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:13:03.272 00:13:03.272 Discovery Log Number of Records 2, Generation counter 2 00:13:03.272 =====Discovery Log Entry 0====== 00:13:03.272 trtype: tcp 00:13:03.272 adrfam: ipv4 00:13:03.272 subtype: current discovery subsystem 00:13:03.272 treq: not required 00:13:03.272 portid: 0 00:13:03.272 trsvcid: 4420 00:13:03.272 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:03.272 traddr: 10.0.0.2 00:13:03.272 eflags: explicit discovery connections, duplicate discovery information 00:13:03.272 sectype: none 00:13:03.272 =====Discovery Log Entry 1====== 00:13:03.272 trtype: tcp 00:13:03.272 adrfam: ipv4 00:13:03.272 subtype: nvme subsystem 00:13:03.272 treq: not required 00:13:03.272 portid: 0 00:13:03.272 trsvcid: 4420 00:13:03.272 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:03.272 traddr: 10.0.0.2 00:13:03.272 eflags: none 00:13:03.272 sectype: none 00:13:03.272 19:28:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:03.272 19:28:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:03.272 19:28:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:13:03.272 19:28:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:03.272 19:28:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:13:03.272 19:28:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:13:03.272 19:28:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 
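The rpc_cmd calls traced above provision the TCP target for this test: a tcp transport with 8192-byte in-capsule data, two 64 MiB malloc bdevs, a subsystem with serial SPDKISFASTANDAWESOME, and tcp listeners for the subsystem and discovery on 10.0.0.2:4420. A minimal stand-alone sketch of the same flow is below; it assumes scripts/rpc.py run from an SPDK checkout against the default /var/tmp/spdk.sock, and the initiator commands mirror the nvme discover/connect/disconnect invocations that appear in the trace just above and below (host NQN/ID are the values generated in this run).
# target side: configure the nvmf_tgt started above (default RPC socket)
rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# initiator side: discover, connect, and later disconnect, as exercised in the trace
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
hostid=00539ede-7deb-ec11-9bc7-a4bf01928396
nvme discover -t tcp -a 10.0.0.2 -s 4420 --hostnqn="$hostnqn" --hostid="$hostid"
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn="$hostnqn" --hostid="$hostid"
nvme disconnect -n nqn.2016-06.io.spdk:cnode1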
00:13:03.272 19:28:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:13:03.272 19:28:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:03.272 19:28:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:03.272 19:28:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:04.657 19:28:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:04.657 19:28:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # local i=0 00:13:04.657 19:28:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:04.657 19:28:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:13:04.657 19:28:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:13:04.657 19:28:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # sleep 2 00:13:06.567 19:28:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:06.567 19:28:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:06.567 19:28:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:06.567 19:28:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:13:06.568 19:28:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:06.568 19:28:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # return 0 00:13:06.568 19:28:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:06.568 19:28:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:13:06.568 19:28:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:06.568 19:28:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:13:06.829 19:28:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:13:06.829 19:28:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:06.829 19:28:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:13:06.829 19:28:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:06.829 19:28:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:06.829 19:28:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:13:06.829 19:28:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:06.829 19:28:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:06.829 19:28:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:13:06.829 19:28:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:06.829 19:28:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:13:06.829 /dev/nvme0n1 ]] 00:13:06.829 19:28:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:06.829 19:28:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:06.829 19:28:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:13:06.829 19:28:32 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:06.829 19:28:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:13:07.089 19:28:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:13:07.089 19:28:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:07.089 19:28:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:13:07.089 19:28:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:07.089 19:28:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:07.089 19:28:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:13:07.089 19:28:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:07.089 19:28:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:07.089 19:28:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:13:07.089 19:28:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:07.089 19:28:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:07.089 19:28:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:07.349 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.349 19:28:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:07.349 19:28:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1215 -- # local i=0 00:13:07.350 19:28:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:07.350 19:28:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.350 19:28:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:07.350 19:28:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.350 19:28:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # return 0 00:13:07.350 19:28:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:07.350 19:28:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:07.350 19:28:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.350 19:28:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:07.350 19:28:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.350 19:28:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:07.350 19:28:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:07.350 19:28:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:07.350 19:28:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:13:07.350 19:28:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:07.350 19:28:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:13:07.350 19:28:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:07.350 19:28:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:07.350 rmmod nvme_tcp 00:13:07.350 rmmod nvme_fabrics 00:13:07.350 rmmod nvme_keyring 00:13:07.610 19:28:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:13:07.610 19:28:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:13:07.610 19:28:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:13:07.610 19:28:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 3496212 ']' 00:13:07.610 19:28:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 3496212 00:13:07.610 19:28:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@946 -- # '[' -z 3496212 ']' 00:13:07.610 19:28:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # kill -0 3496212 00:13:07.610 19:28:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # uname 00:13:07.610 19:28:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:07.610 19:28:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3496212 00:13:07.610 19:28:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:07.610 19:28:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:07.610 19:28:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3496212' 00:13:07.610 killing process with pid 3496212 00:13:07.610 19:28:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # kill 3496212 00:13:07.610 [2024-05-15 19:28:33.612329] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:07.610 19:28:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # wait 3496212 00:13:07.610 19:28:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:07.610 19:28:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:07.610 19:28:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:07.610 19:28:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:07.610 19:28:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:07.610 19:28:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.610 19:28:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:07.610 19:28:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:10.154 19:28:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:10.154 00:13:10.154 real 0m16.266s 00:13:10.154 user 0m24.429s 00:13:10.154 sys 0m6.796s 00:13:10.154 19:28:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:10.154 19:28:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:10.154 ************************************ 00:13:10.154 END TEST nvmf_nvme_cli 00:13:10.154 ************************************ 00:13:10.154 19:28:35 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:13:10.154 19:28:35 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:10.154 19:28:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:10.154 19:28:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:10.154 19:28:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:10.154 ************************************ 00:13:10.154 
START TEST nvmf_vfio_user 00:13:10.154 ************************************ 00:13:10.154 19:28:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:10.154 * Looking for test storage... 00:13:10.154 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:10.154 19:28:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:10.154 19:28:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:13:10.154 19:28:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:10.154 19:28:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:10.154 19:28:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:10.154 19:28:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:10.154 19:28:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:10.154 19:28:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:10.154 19:28:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:10.154 19:28:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:10.154 19:28:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:10.154 19:28:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:10.154 19:28:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:10.154 19:28:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:10.154 19:28:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:10.154 19:28:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:10.154 19:28:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:10.154 19:28:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:10.154 19:28:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:10.154 19:28:36 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:10.154 19:28:36 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:10.154 19:28:36 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:10.154 19:28:36 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.154 19:28:36 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.154 19:28:36 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.154 19:28:36 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:13:10.154 19:28:36 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.154 19:28:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:13:10.154 19:28:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:10.154 19:28:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:10.154 19:28:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:10.155 19:28:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:10.155 19:28:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:10.155 19:28:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:10.155 19:28:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:10.155 19:28:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:10.155 19:28:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:10.155 19:28:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:10.155 19:28:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:10.155 19:28:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:10.155 19:28:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:10.155 19:28:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:10.155 19:28:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:10.155 19:28:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 
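The common.sh sourcing above also generates the host identity used by nvme discover/connect in both tests: NVME_HOSTNQN comes from nvme gen-hostnqn and NVME_HOSTID is the UUID portion of that NQN. A minimal sketch of the derivation; the exact parameter expansion is an assumption, but the resulting values match the ones traced in this run.
NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
NVME_HOSTID=${NVME_HOSTNQN##*:}    # assumption: keep the UUID after the last ':'
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")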
00:13:10.155 19:28:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:10.155 19:28:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:10.155 19:28:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3497909 00:13:10.155 19:28:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3497909' 00:13:10.155 Process pid: 3497909 00:13:10.155 19:28:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:10.155 19:28:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3497909 00:13:10.155 19:28:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 3497909 ']' 00:13:10.155 19:28:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.155 19:28:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:10.155 19:28:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:10.155 19:28:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:10.155 19:28:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:10.155 19:28:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:10.155 [2024-05-15 19:28:36.126938] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:13:10.155 [2024-05-15 19:28:36.127011] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:10.155 EAL: No free 2048 kB hugepages reported on node 1 00:13:10.155 [2024-05-15 19:28:36.216441] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:10.155 [2024-05-15 19:28:36.288281] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:10.155 [2024-05-15 19:28:36.288321] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:10.155 [2024-05-15 19:28:36.288329] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:10.155 [2024-05-15 19:28:36.288336] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:10.155 [2024-05-15 19:28:36.288342] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
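As in the TCP test, the vfio-user target above is started via nvmf_tgt, here pinned to cores 0-3, and the harness waits for the application's RPC socket before configuring it. A minimal stand-alone sketch, assuming an SPDK build tree and the default /var/tmp/spdk.sock; the polling loop is an illustrative stand-in for the waitforlisten helper used by the test.
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
nvmfpid=$!
echo "Process pid: $nvmfpid"
# poll the RPC socket until the target is ready to accept configuration RPCs
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done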
00:13:10.155 [2024-05-15 19:28:36.288536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:10.155 [2024-05-15 19:28:36.288713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:10.155 [2024-05-15 19:28:36.288871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:10.155 [2024-05-15 19:28:36.288872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.095 19:28:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:11.095 19:28:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:13:11.095 19:28:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:12.038 19:28:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:12.299 19:28:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:12.299 19:28:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:12.299 19:28:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:12.299 19:28:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:12.299 19:28:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:12.299 Malloc1 00:13:12.299 19:28:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:12.559 19:28:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:12.819 19:28:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:13.079 [2024-05-15 19:28:39.080866] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:13.079 19:28:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:13.079 19:28:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:13.079 19:28:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:13.339 Malloc2 00:13:13.339 19:28:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:13.600 19:28:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:13.600 19:28:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 
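The setup traced above creates the VFIOUSER transport and then provisions two controllers, each backed by a 64 MiB, 512-byte-block malloc bdev and listening on its own socket directory under /var/run/vfio-user. A minimal sketch of that loop, assuming scripts/rpc.py from an SPDK checkout and the default RPC socket; the run step that follows then points spdk_nvme_identify at each directory with -r 'trtype:VFIOUSER traddr:<dir> subnqn:nqn.2019-07.io.spdk:cnode<i>' -g -L nvme -L nvme_vfio -L vfio_pci, producing the identify output shown below.
rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t VFIOUSER
for i in 1 2; do
    dir=/var/run/vfio-user/domain/vfio-user$i/$i
    mkdir -p "$dir"
    $rpc bdev_malloc_create 64 512 -b Malloc$i
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER -a "$dir" -s 0
done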
00:13:13.860 19:28:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:13.860 19:28:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:13.860 19:28:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:13.861 19:28:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:13.861 19:28:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:13.861 19:28:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:13.861 [2024-05-15 19:28:40.021550] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:13:13.861 [2024-05-15 19:28:40.021617] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3498717 ] 00:13:13.861 EAL: No free 2048 kB hugepages reported on node 1 00:13:14.123 [2024-05-15 19:28:40.052984] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:14.123 [2024-05-15 19:28:40.057373] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:14.123 [2024-05-15 19:28:40.057394] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ffbf482a000 00:13:14.123 [2024-05-15 19:28:40.058370] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:14.123 [2024-05-15 19:28:40.059379] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:14.123 [2024-05-15 19:28:40.060384] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:14.123 [2024-05-15 19:28:40.061388] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:14.123 [2024-05-15 19:28:40.062398] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:14.123 [2024-05-15 19:28:40.063399] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:14.123 [2024-05-15 19:28:40.064409] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:14.123 [2024-05-15 19:28:40.065412] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:14.123 [2024-05-15 19:28:40.066421] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:14.123 [2024-05-15 19:28:40.066454] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ffbf481f000 00:13:14.123 [2024-05-15 19:28:40.067961] 
vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:14.123 [2024-05-15 19:28:40.089470] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:14.123 [2024-05-15 19:28:40.089493] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:13:14.123 [2024-05-15 19:28:40.091570] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:14.123 [2024-05-15 19:28:40.091617] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:14.123 [2024-05-15 19:28:40.091706] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:13:14.123 [2024-05-15 19:28:40.091722] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:13:14.123 [2024-05-15 19:28:40.091727] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:13:14.123 [2024-05-15 19:28:40.092565] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:14.123 [2024-05-15 19:28:40.092574] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:13:14.123 [2024-05-15 19:28:40.092581] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:13:14.123 [2024-05-15 19:28:40.093574] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:14.123 [2024-05-15 19:28:40.093582] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:13:14.123 [2024-05-15 19:28:40.093590] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:13:14.123 [2024-05-15 19:28:40.094579] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:14.123 [2024-05-15 19:28:40.094586] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:14.123 [2024-05-15 19:28:40.095583] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:13:14.123 [2024-05-15 19:28:40.095590] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:13:14.123 [2024-05-15 19:28:40.095595] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:13:14.123 [2024-05-15 19:28:40.095601] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:14.123 
[2024-05-15 19:28:40.095707] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:13:14.123 [2024-05-15 19:28:40.095712] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:14.123 [2024-05-15 19:28:40.095717] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:14.123 [2024-05-15 19:28:40.096591] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:14.123 [2024-05-15 19:28:40.097592] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:14.123 [2024-05-15 19:28:40.098603] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:14.123 [2024-05-15 19:28:40.099603] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:14.123 [2024-05-15 19:28:40.099668] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:14.123 [2024-05-15 19:28:40.100619] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:14.123 [2024-05-15 19:28:40.100630] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:14.124 [2024-05-15 19:28:40.100635] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:13:14.124 [2024-05-15 19:28:40.100656] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:13:14.124 [2024-05-15 19:28:40.100668] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:13:14.124 [2024-05-15 19:28:40.100684] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:14.124 [2024-05-15 19:28:40.100689] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:14.124 [2024-05-15 19:28:40.100702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:14.124 [2024-05-15 19:28:40.100746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:14.124 [2024-05-15 19:28:40.100755] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:13:14.124 [2024-05-15 19:28:40.100760] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:13:14.124 [2024-05-15 19:28:40.100764] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:13:14.124 [2024-05-15 19:28:40.100769] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:14.124 [2024-05-15 19:28:40.100773] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:13:14.124 [2024-05-15 19:28:40.100778] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:13:14.124 [2024-05-15 19:28:40.100782] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:13:14.124 [2024-05-15 19:28:40.100792] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:13:14.124 [2024-05-15 19:28:40.100805] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:14.124 [2024-05-15 19:28:40.100821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:14.124 [2024-05-15 19:28:40.100833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:14.124 [2024-05-15 19:28:40.100842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:14.124 [2024-05-15 19:28:40.100850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:14.124 [2024-05-15 19:28:40.100858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:14.124 [2024-05-15 19:28:40.100863] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:13:14.124 [2024-05-15 19:28:40.100869] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:14.124 [2024-05-15 19:28:40.100878] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:14.124 [2024-05-15 19:28:40.100887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:14.124 [2024-05-15 19:28:40.100895] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:13:14.124 [2024-05-15 19:28:40.100902] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:14.124 [2024-05-15 19:28:40.100908] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:13:14.124 [2024-05-15 19:28:40.100914] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:13:14.124 [2024-05-15 19:28:40.100922] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:14.124 [2024-05-15 
19:28:40.100939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:14.124 [2024-05-15 19:28:40.100990] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:13:14.124 [2024-05-15 19:28:40.100997] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:13:14.124 [2024-05-15 19:28:40.101005] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:14.124 [2024-05-15 19:28:40.101009] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:14.124 [2024-05-15 19:28:40.101015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:14.124 [2024-05-15 19:28:40.101032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:14.124 [2024-05-15 19:28:40.101044] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:13:14.124 [2024-05-15 19:28:40.101052] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:13:14.124 [2024-05-15 19:28:40.101059] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:13:14.124 [2024-05-15 19:28:40.101066] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:14.124 [2024-05-15 19:28:40.101070] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:14.124 [2024-05-15 19:28:40.101076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:14.124 [2024-05-15 19:28:40.101099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:14.124 [2024-05-15 19:28:40.101109] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:14.124 [2024-05-15 19:28:40.101116] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:14.124 [2024-05-15 19:28:40.101123] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:14.124 [2024-05-15 19:28:40.101127] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:14.124 [2024-05-15 19:28:40.101133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:14.124 [2024-05-15 19:28:40.101148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:14.124 [2024-05-15 19:28:40.101160] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:14.124 
[2024-05-15 19:28:40.101166] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:13:14.124 [2024-05-15 19:28:40.101173] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:13:14.124 [2024-05-15 19:28:40.101179] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:14.124 [2024-05-15 19:28:40.101184] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:13:14.124 [2024-05-15 19:28:40.101189] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:13:14.124 [2024-05-15 19:28:40.101193] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:13:14.124 [2024-05-15 19:28:40.101198] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:13:14.124 [2024-05-15 19:28:40.101217] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:14.124 [2024-05-15 19:28:40.101231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:14.124 [2024-05-15 19:28:40.101243] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:14.124 [2024-05-15 19:28:40.101253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:14.124 [2024-05-15 19:28:40.101264] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:14.124 [2024-05-15 19:28:40.101277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:14.124 [2024-05-15 19:28:40.101288] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:14.124 [2024-05-15 19:28:40.101297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:14.124 [2024-05-15 19:28:40.101307] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:14.124 [2024-05-15 19:28:40.101311] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:14.124 [2024-05-15 19:28:40.101320] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:14.124 [2024-05-15 19:28:40.101324] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:14.124 [2024-05-15 19:28:40.101330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:14.124 [2024-05-15 19:28:40.101337] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:14.124 [2024-05-15 19:28:40.101342] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:14.124 [2024-05-15 19:28:40.101348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:14.124 [2024-05-15 19:28:40.101355] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:14.124 [2024-05-15 19:28:40.101359] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:14.124 [2024-05-15 19:28:40.101365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:14.124 [2024-05-15 19:28:40.101377] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:14.124 [2024-05-15 19:28:40.101381] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:14.124 [2024-05-15 19:28:40.101387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:14.124 [2024-05-15 19:28:40.101394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:14.124 [2024-05-15 19:28:40.101408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:14.124 [2024-05-15 19:28:40.101417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:14.124 [2024-05-15 19:28:40.101425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:14.124 ===================================================== 00:13:14.124 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:14.124 ===================================================== 00:13:14.124 Controller Capabilities/Features 00:13:14.124 ================================ 00:13:14.124 Vendor ID: 4e58 00:13:14.124 Subsystem Vendor ID: 4e58 00:13:14.124 Serial Number: SPDK1 00:13:14.124 Model Number: SPDK bdev Controller 00:13:14.124 Firmware Version: 24.05 00:13:14.124 Recommended Arb Burst: 6 00:13:14.124 IEEE OUI Identifier: 8d 6b 50 00:13:14.124 Multi-path I/O 00:13:14.124 May have multiple subsystem ports: Yes 00:13:14.124 May have multiple controllers: Yes 00:13:14.124 Associated with SR-IOV VF: No 00:13:14.124 Max Data Transfer Size: 131072 00:13:14.124 Max Number of Namespaces: 32 00:13:14.124 Max Number of I/O Queues: 127 00:13:14.124 NVMe Specification Version (VS): 1.3 00:13:14.124 NVMe Specification Version (Identify): 1.3 00:13:14.124 Maximum Queue Entries: 256 00:13:14.124 Contiguous Queues Required: Yes 00:13:14.124 Arbitration Mechanisms Supported 00:13:14.124 Weighted Round Robin: Not Supported 00:13:14.124 Vendor Specific: Not Supported 00:13:14.124 Reset Timeout: 15000 ms 00:13:14.124 Doorbell Stride: 4 bytes 00:13:14.124 NVM Subsystem Reset: Not Supported 00:13:14.124 Command Sets Supported 00:13:14.124 NVM Command Set: Supported 00:13:14.124 Boot Partition: Not Supported 00:13:14.125 Memory Page Size Minimum: 4096 bytes 00:13:14.125 Memory Page Size Maximum: 4096 bytes 00:13:14.125 Persistent Memory Region: Not Supported 00:13:14.125 Optional Asynchronous 
Events Supported 00:13:14.125 Namespace Attribute Notices: Supported 00:13:14.125 Firmware Activation Notices: Not Supported 00:13:14.125 ANA Change Notices: Not Supported 00:13:14.125 PLE Aggregate Log Change Notices: Not Supported 00:13:14.125 LBA Status Info Alert Notices: Not Supported 00:13:14.125 EGE Aggregate Log Change Notices: Not Supported 00:13:14.125 Normal NVM Subsystem Shutdown event: Not Supported 00:13:14.125 Zone Descriptor Change Notices: Not Supported 00:13:14.125 Discovery Log Change Notices: Not Supported 00:13:14.125 Controller Attributes 00:13:14.125 128-bit Host Identifier: Supported 00:13:14.125 Non-Operational Permissive Mode: Not Supported 00:13:14.125 NVM Sets: Not Supported 00:13:14.125 Read Recovery Levels: Not Supported 00:13:14.125 Endurance Groups: Not Supported 00:13:14.125 Predictable Latency Mode: Not Supported 00:13:14.125 Traffic Based Keep ALive: Not Supported 00:13:14.125 Namespace Granularity: Not Supported 00:13:14.125 SQ Associations: Not Supported 00:13:14.125 UUID List: Not Supported 00:13:14.125 Multi-Domain Subsystem: Not Supported 00:13:14.125 Fixed Capacity Management: Not Supported 00:13:14.125 Variable Capacity Management: Not Supported 00:13:14.125 Delete Endurance Group: Not Supported 00:13:14.125 Delete NVM Set: Not Supported 00:13:14.125 Extended LBA Formats Supported: Not Supported 00:13:14.125 Flexible Data Placement Supported: Not Supported 00:13:14.125 00:13:14.125 Controller Memory Buffer Support 00:13:14.125 ================================ 00:13:14.125 Supported: No 00:13:14.125 00:13:14.125 Persistent Memory Region Support 00:13:14.125 ================================ 00:13:14.125 Supported: No 00:13:14.125 00:13:14.125 Admin Command Set Attributes 00:13:14.125 ============================ 00:13:14.125 Security Send/Receive: Not Supported 00:13:14.125 Format NVM: Not Supported 00:13:14.125 Firmware Activate/Download: Not Supported 00:13:14.125 Namespace Management: Not Supported 00:13:14.125 Device Self-Test: Not Supported 00:13:14.125 Directives: Not Supported 00:13:14.125 NVMe-MI: Not Supported 00:13:14.125 Virtualization Management: Not Supported 00:13:14.125 Doorbell Buffer Config: Not Supported 00:13:14.125 Get LBA Status Capability: Not Supported 00:13:14.125 Command & Feature Lockdown Capability: Not Supported 00:13:14.125 Abort Command Limit: 4 00:13:14.125 Async Event Request Limit: 4 00:13:14.125 Number of Firmware Slots: N/A 00:13:14.125 Firmware Slot 1 Read-Only: N/A 00:13:14.125 Firmware Activation Without Reset: N/A 00:13:14.125 Multiple Update Detection Support: N/A 00:13:14.125 Firmware Update Granularity: No Information Provided 00:13:14.125 Per-Namespace SMART Log: No 00:13:14.125 Asymmetric Namespace Access Log Page: Not Supported 00:13:14.125 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:14.125 Command Effects Log Page: Supported 00:13:14.125 Get Log Page Extended Data: Supported 00:13:14.125 Telemetry Log Pages: Not Supported 00:13:14.125 Persistent Event Log Pages: Not Supported 00:13:14.125 Supported Log Pages Log Page: May Support 00:13:14.125 Commands Supported & Effects Log Page: Not Supported 00:13:14.125 Feature Identifiers & Effects Log Page:May Support 00:13:14.125 NVMe-MI Commands & Effects Log Page: May Support 00:13:14.125 Data Area 4 for Telemetry Log: Not Supported 00:13:14.125 Error Log Page Entries Supported: 128 00:13:14.125 Keep Alive: Supported 00:13:14.125 Keep Alive Granularity: 10000 ms 00:13:14.125 00:13:14.125 NVM Command Set Attributes 00:13:14.125 ========================== 
00:13:14.125 Submission Queue Entry Size 00:13:14.125 Max: 64 00:13:14.125 Min: 64 00:13:14.125 Completion Queue Entry Size 00:13:14.125 Max: 16 00:13:14.125 Min: 16 00:13:14.125 Number of Namespaces: 32 00:13:14.125 Compare Command: Supported 00:13:14.125 Write Uncorrectable Command: Not Supported 00:13:14.125 Dataset Management Command: Supported 00:13:14.125 Write Zeroes Command: Supported 00:13:14.125 Set Features Save Field: Not Supported 00:13:14.125 Reservations: Not Supported 00:13:14.125 Timestamp: Not Supported 00:13:14.125 Copy: Supported 00:13:14.125 Volatile Write Cache: Present 00:13:14.125 Atomic Write Unit (Normal): 1 00:13:14.125 Atomic Write Unit (PFail): 1 00:13:14.125 Atomic Compare & Write Unit: 1 00:13:14.125 Fused Compare & Write: Supported 00:13:14.125 Scatter-Gather List 00:13:14.125 SGL Command Set: Supported (Dword aligned) 00:13:14.125 SGL Keyed: Not Supported 00:13:14.125 SGL Bit Bucket Descriptor: Not Supported 00:13:14.125 SGL Metadata Pointer: Not Supported 00:13:14.125 Oversized SGL: Not Supported 00:13:14.125 SGL Metadata Address: Not Supported 00:13:14.125 SGL Offset: Not Supported 00:13:14.125 Transport SGL Data Block: Not Supported 00:13:14.125 Replay Protected Memory Block: Not Supported 00:13:14.125 00:13:14.125 Firmware Slot Information 00:13:14.125 ========================= 00:13:14.125 Active slot: 1 00:13:14.125 Slot 1 Firmware Revision: 24.05 00:13:14.125 00:13:14.125 00:13:14.125 Commands Supported and Effects 00:13:14.125 ============================== 00:13:14.125 Admin Commands 00:13:14.125 -------------- 00:13:14.125 Get Log Page (02h): Supported 00:13:14.125 Identify (06h): Supported 00:13:14.125 Abort (08h): Supported 00:13:14.125 Set Features (09h): Supported 00:13:14.125 Get Features (0Ah): Supported 00:13:14.125 Asynchronous Event Request (0Ch): Supported 00:13:14.125 Keep Alive (18h): Supported 00:13:14.125 I/O Commands 00:13:14.125 ------------ 00:13:14.125 Flush (00h): Supported LBA-Change 00:13:14.125 Write (01h): Supported LBA-Change 00:13:14.125 Read (02h): Supported 00:13:14.125 Compare (05h): Supported 00:13:14.125 Write Zeroes (08h): Supported LBA-Change 00:13:14.125 Dataset Management (09h): Supported LBA-Change 00:13:14.125 Copy (19h): Supported LBA-Change 00:13:14.125 Unknown (79h): Supported LBA-Change 00:13:14.125 Unknown (7Ah): Supported 00:13:14.125 00:13:14.125 Error Log 00:13:14.125 ========= 00:13:14.125 00:13:14.125 Arbitration 00:13:14.125 =========== 00:13:14.125 Arbitration Burst: 1 00:13:14.125 00:13:14.125 Power Management 00:13:14.125 ================ 00:13:14.125 Number of Power States: 1 00:13:14.125 Current Power State: Power State #0 00:13:14.125 Power State #0: 00:13:14.125 Max Power: 0.00 W 00:13:14.125 Non-Operational State: Operational 00:13:14.125 Entry Latency: Not Reported 00:13:14.125 Exit Latency: Not Reported 00:13:14.125 Relative Read Throughput: 0 00:13:14.125 Relative Read Latency: 0 00:13:14.125 Relative Write Throughput: 0 00:13:14.125 Relative Write Latency: 0 00:13:14.125 Idle Power: Not Reported 00:13:14.125 Active Power: Not Reported 00:13:14.125 Non-Operational Permissive Mode: Not Supported 00:13:14.125 00:13:14.125 Health Information 00:13:14.125 ================== 00:13:14.125 Critical Warnings: 00:13:14.125 Available Spare Space: OK 00:13:14.125 Temperature: OK 00:13:14.125 Device Reliability: OK 00:13:14.125 Read Only: No 00:13:14.125 Volatile Memory Backup: OK 00:13:14.125 Current Temperature: 0 Kelvin (-2[2024-05-15 19:28:40.101528] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:14.125 [2024-05-15 19:28:40.101539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:14.125 [2024-05-15 19:28:40.101564] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:13:14.125 [2024-05-15 19:28:40.101572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:14.125 [2024-05-15 19:28:40.101579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:14.125 [2024-05-15 19:28:40.101585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:14.125 [2024-05-15 19:28:40.101591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:14.125 [2024-05-15 19:28:40.104321] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:14.125 [2024-05-15 19:28:40.104332] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:13:14.125 [2024-05-15 19:28:40.104635] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:14.125 [2024-05-15 19:28:40.104685] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:13:14.125 [2024-05-15 19:28:40.104691] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:13:14.125 [2024-05-15 19:28:40.105637] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:13:14.125 [2024-05-15 19:28:40.105648] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:13:14.125 [2024-05-15 19:28:40.105734] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:13:14.125 [2024-05-15 19:28:40.107673] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:14.125 73 Celsius) 00:13:14.125 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:14.125 Available Spare: 0% 00:13:14.125 Available Spare Threshold: 0% 00:13:14.126 Life Percentage Used: 0% 00:13:14.126 Data Units Read: 0 00:13:14.126 Data Units Written: 0 00:13:14.126 Host Read Commands: 0 00:13:14.126 Host Write Commands: 0 00:13:14.126 Controller Busy Time: 0 minutes 00:13:14.126 Power Cycles: 0 00:13:14.126 Power On Hours: 0 hours 00:13:14.126 Unsafe Shutdowns: 0 00:13:14.126 Unrecoverable Media Errors: 0 00:13:14.126 Lifetime Error Log Entries: 0 00:13:14.126 Warning Temperature Time: 0 minutes 00:13:14.126 Critical Temperature Time: 0 minutes 00:13:14.126 00:13:14.126 Number of Queues 00:13:14.126 ================ 00:13:14.126 Number of I/O Submission Queues: 127 00:13:14.126 Number of I/O Completion Queues: 127 00:13:14.126 00:13:14.126 Active Namespaces 00:13:14.126 ================= 00:13:14.126 Namespace 
ID:1 00:13:14.126 Error Recovery Timeout: Unlimited 00:13:14.126 Command Set Identifier: NVM (00h) 00:13:14.126 Deallocate: Supported 00:13:14.126 Deallocated/Unwritten Error: Not Supported 00:13:14.126 Deallocated Read Value: Unknown 00:13:14.126 Deallocate in Write Zeroes: Not Supported 00:13:14.126 Deallocated Guard Field: 0xFFFF 00:13:14.126 Flush: Supported 00:13:14.126 Reservation: Supported 00:13:14.126 Namespace Sharing Capabilities: Multiple Controllers 00:13:14.126 Size (in LBAs): 131072 (0GiB) 00:13:14.126 Capacity (in LBAs): 131072 (0GiB) 00:13:14.126 Utilization (in LBAs): 131072 (0GiB) 00:13:14.126 NGUID: 357941CAF325452AAD37C3C2D63AF1BC 00:13:14.126 UUID: 357941ca-f325-452a-ad37-c3c2d63af1bc 00:13:14.126 Thin Provisioning: Not Supported 00:13:14.126 Per-NS Atomic Units: Yes 00:13:14.126 Atomic Boundary Size (Normal): 0 00:13:14.126 Atomic Boundary Size (PFail): 0 00:13:14.126 Atomic Boundary Offset: 0 00:13:14.126 Maximum Single Source Range Length: 65535 00:13:14.126 Maximum Copy Length: 65535 00:13:14.126 Maximum Source Range Count: 1 00:13:14.126 NGUID/EUI64 Never Reused: No 00:13:14.126 Namespace Write Protected: No 00:13:14.126 Number of LBA Formats: 1 00:13:14.126 Current LBA Format: LBA Format #00 00:13:14.126 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:14.126 00:13:14.126 19:28:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:14.126 EAL: No free 2048 kB hugepages reported on node 1 00:13:14.386 [2024-05-15 19:28:40.310032] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:19.672 Initializing NVMe Controllers 00:13:19.672 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:19.672 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:19.672 Initialization complete. Launching workers. 00:13:19.672 ======================================================== 00:13:19.672 Latency(us) 00:13:19.672 Device Information : IOPS MiB/s Average min max 00:13:19.672 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 35004.95 136.74 3655.96 1194.95 7284.03 00:13:19.672 ======================================================== 00:13:19.672 Total : 35004.95 136.74 3655.96 1194.95 7284.03 00:13:19.672 00:13:19.672 [2024-05-15 19:28:45.329073] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:19.672 19:28:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:19.672 EAL: No free 2048 kB hugepages reported on node 1 00:13:19.672 [2024-05-15 19:28:45.535117] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:24.957 Initializing NVMe Controllers 00:13:24.957 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:24.957 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:24.957 Initialization complete. Launching workers. 
00:13:24.957 ======================================================== 00:13:24.957 Latency(us) 00:13:24.957 Device Information : IOPS MiB/s Average min max 00:13:24.957 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7980.55 4987.52 10976.20 00:13:24.957 ======================================================== 00:13:24.957 Total : 16051.20 62.70 7980.55 4987.52 10976.20 00:13:24.957 00:13:24.957 [2024-05-15 19:28:50.569553] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:24.957 19:28:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:24.957 EAL: No free 2048 kB hugepages reported on node 1 00:13:24.957 [2024-05-15 19:28:50.802646] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:30.245 [2024-05-15 19:28:55.871522] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:30.245 Initializing NVMe Controllers 00:13:30.245 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:30.245 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:30.245 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:30.245 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:30.245 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:30.245 Initialization complete. Launching workers. 00:13:30.245 Starting thread on core 2 00:13:30.245 Starting thread on core 3 00:13:30.245 Starting thread on core 1 00:13:30.245 19:28:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:30.245 EAL: No free 2048 kB hugepages reported on node 1 00:13:30.245 [2024-05-15 19:28:56.158714] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:33.544 [2024-05-15 19:28:59.468460] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:33.544 Initializing NVMe Controllers 00:13:33.544 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:33.544 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:33.544 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:33.544 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:33.544 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:33.544 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:33.544 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:33.544 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:33.544 Initialization complete. Launching workers. 
00:13:33.544 Starting thread on core 1 with urgent priority queue 00:13:33.544 Starting thread on core 2 with urgent priority queue 00:13:33.544 Starting thread on core 3 with urgent priority queue 00:13:33.544 Starting thread on core 0 with urgent priority queue 00:13:33.544 SPDK bdev Controller (SPDK1 ) core 0: 10718.00 IO/s 9.33 secs/100000 ios 00:13:33.544 SPDK bdev Controller (SPDK1 ) core 1: 7588.33 IO/s 13.18 secs/100000 ios 00:13:33.544 SPDK bdev Controller (SPDK1 ) core 2: 9758.00 IO/s 10.25 secs/100000 ios 00:13:33.544 SPDK bdev Controller (SPDK1 ) core 3: 7289.33 IO/s 13.72 secs/100000 ios 00:13:33.545 ======================================================== 00:13:33.545 00:13:33.545 19:28:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:33.545 EAL: No free 2048 kB hugepages reported on node 1 00:13:33.805 [2024-05-15 19:28:59.737808] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:33.805 Initializing NVMe Controllers 00:13:33.805 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:33.805 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:33.805 Namespace ID: 1 size: 0GB 00:13:33.805 Initialization complete. 00:13:33.805 INFO: using host memory buffer for IO 00:13:33.805 Hello world! 00:13:33.805 [2024-05-15 19:28:59.771029] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:33.805 19:28:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:33.805 EAL: No free 2048 kB hugepages reported on node 1 00:13:34.066 [2024-05-15 19:29:00.042812] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:35.005 Initializing NVMe Controllers 00:13:35.005 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:35.005 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:35.005 Initialization complete. Launching workers. 
00:13:35.005 submit (in ns) avg, min, max = 7793.0, 3931.7, 4000490.0 00:13:35.005 complete (in ns) avg, min, max = 18967.4, 2406.7, 3999835.0 00:13:35.005 00:13:35.005 Submit histogram 00:13:35.005 ================ 00:13:35.005 Range in us Cumulative Count 00:13:35.005 3.920 - 3.947: 0.3601% ( 70) 00:13:35.005 3.947 - 3.973: 3.3846% ( 588) 00:13:35.005 3.973 - 4.000: 11.1157% ( 1503) 00:13:35.005 4.000 - 4.027: 22.4114% ( 2196) 00:13:35.005 4.027 - 4.053: 34.0929% ( 2271) 00:13:35.005 4.053 - 4.080: 45.8001% ( 2276) 00:13:35.005 4.080 - 4.107: 61.3394% ( 3021) 00:13:35.005 4.107 - 4.133: 77.0022% ( 3045) 00:13:35.005 4.133 - 4.160: 87.9070% ( 2120) 00:13:35.005 4.160 - 4.187: 94.4653% ( 1275) 00:13:35.005 4.187 - 4.213: 97.5670% ( 603) 00:13:35.005 4.213 - 4.240: 98.7398% ( 228) 00:13:35.005 4.240 - 4.267: 99.2439% ( 98) 00:13:35.005 4.267 - 4.293: 99.4085% ( 32) 00:13:35.005 4.293 - 4.320: 99.4753% ( 13) 00:13:35.005 4.320 - 4.347: 99.4908% ( 3) 00:13:35.005 4.347 - 4.373: 99.5011% ( 2) 00:13:35.005 4.400 - 4.427: 99.5113% ( 2) 00:13:35.005 4.533 - 4.560: 99.5165% ( 1) 00:13:35.005 4.560 - 4.587: 99.5216% ( 1) 00:13:35.005 4.587 - 4.613: 99.5268% ( 1) 00:13:35.005 4.640 - 4.667: 99.5319% ( 1) 00:13:35.005 4.800 - 4.827: 99.5371% ( 1) 00:13:35.005 4.827 - 4.853: 99.5422% ( 1) 00:13:35.005 4.853 - 4.880: 99.5473% ( 1) 00:13:35.005 5.040 - 5.067: 99.5525% ( 1) 00:13:35.005 5.093 - 5.120: 99.5576% ( 1) 00:13:35.005 5.147 - 5.173: 99.5628% ( 1) 00:13:35.006 5.333 - 5.360: 99.5782% ( 3) 00:13:35.006 5.680 - 5.707: 99.5834% ( 1) 00:13:35.006 5.840 - 5.867: 99.5885% ( 1) 00:13:35.006 5.920 - 5.947: 99.5936% ( 1) 00:13:35.006 6.080 - 6.107: 99.5988% ( 1) 00:13:35.006 6.107 - 6.133: 99.6039% ( 1) 00:13:35.006 6.133 - 6.160: 99.6091% ( 1) 00:13:35.006 6.213 - 6.240: 99.6194% ( 2) 00:13:35.006 6.240 - 6.267: 99.6296% ( 2) 00:13:35.006 6.267 - 6.293: 99.6348% ( 1) 00:13:35.006 6.320 - 6.347: 99.6399% ( 1) 00:13:35.006 6.373 - 6.400: 99.6451% ( 1) 00:13:35.006 6.400 - 6.427: 99.6605% ( 3) 00:13:35.006 6.427 - 6.453: 99.6759% ( 3) 00:13:35.006 6.480 - 6.507: 99.6862% ( 2) 00:13:35.006 6.507 - 6.533: 99.6914% ( 1) 00:13:35.006 6.533 - 6.560: 99.7017% ( 2) 00:13:35.006 6.587 - 6.613: 99.7068% ( 1) 00:13:35.006 6.667 - 6.693: 99.7119% ( 1) 00:13:35.006 6.720 - 6.747: 99.7171% ( 1) 00:13:35.006 6.773 - 6.800: 99.7222% ( 1) 00:13:35.006 6.800 - 6.827: 99.7274% ( 1) 00:13:35.006 6.827 - 6.880: 99.7428% ( 3) 00:13:35.006 6.880 - 6.933: 99.7480% ( 1) 00:13:35.006 6.933 - 6.987: 99.7685% ( 4) 00:13:35.006 6.987 - 7.040: 99.7737% ( 1) 00:13:35.006 7.093 - 7.147: 99.7788% ( 1) 00:13:35.006 7.147 - 7.200: 99.7891% ( 2) 00:13:35.006 7.253 - 7.307: 99.7942% ( 1) 00:13:35.006 7.360 - 7.413: 99.7994% ( 1) 00:13:35.006 7.413 - 7.467: 99.8097% ( 2) 00:13:35.006 7.467 - 7.520: 99.8148% ( 1) 00:13:35.006 7.520 - 7.573: 99.8200% ( 1) 00:13:35.006 7.627 - 7.680: 99.8251% ( 1) 00:13:35.006 7.680 - 7.733: 99.8303% ( 1) 00:13:35.006 7.733 - 7.787: 99.8354% ( 1) 00:13:35.006 7.787 - 7.840: 99.8560% ( 4) 00:13:35.006 8.053 - 8.107: 99.8611% ( 1) 00:13:35.006 8.213 - 8.267: 99.8663% ( 1) 00:13:35.006 8.267 - 8.320: 99.8714% ( 1) 00:13:35.006 8.427 - 8.480: 99.8765% ( 1) 00:13:35.006 8.480 - 8.533: 99.8868% ( 2) 00:13:35.006 8.587 - 8.640: 99.8920% ( 1) 00:13:35.006 9.653 - 9.707: 99.8971% ( 1) 00:13:35.006 13.333 - 13.387: 99.9023% ( 1) 00:13:35.006 18.133 - 18.240: 99.9074% ( 1) 00:13:35.006 3986.773 - 4014.080: 100.0000% ( 18) 00:13:35.006 00:13:35.006 Complete histogram 00:13:35.006 ================== 00:13:35.006 Range in us 
Cumulative Count 00:13:35.006 2.400 - 2.413: 0.0154% ( 3) 00:13:35.006 2.413 - 2.427: 0.1235% ( 21) 00:13:35.006 2.427 - [2024-05-15 19:29:01.066077] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:35.006 2.440: 1.0699% ( 184) 00:13:35.006 2.440 - 2.453: 1.1676% ( 19) 00:13:35.006 2.453 - 2.467: 1.3477% ( 35) 00:13:35.006 2.467 - 2.480: 1.4145% ( 13) 00:13:35.006 2.480 - 2.493: 3.1068% ( 329) 00:13:35.006 2.493 - 2.507: 41.9731% ( 7556) 00:13:35.006 2.507 - 2.520: 56.9827% ( 2918) 00:13:35.006 2.520 - 2.533: 69.4512% ( 2424) 00:13:35.006 2.533 - 2.547: 77.8509% ( 1633) 00:13:35.006 2.547 - 2.560: 81.6573% ( 740) 00:13:35.006 2.560 - 2.573: 85.5563% ( 758) 00:13:35.006 2.573 - 2.587: 91.0344% ( 1065) 00:13:35.006 2.587 - 2.600: 94.7688% ( 726) 00:13:35.006 2.600 - 2.613: 97.0115% ( 436) 00:13:35.006 2.613 - 2.627: 98.6318% ( 315) 00:13:35.006 2.627 - 2.640: 99.1873% ( 108) 00:13:35.006 2.640 - 2.653: 99.3365% ( 29) 00:13:35.006 2.653 - 2.667: 99.3827% ( 9) 00:13:35.006 2.667 - 2.680: 99.3982% ( 3) 00:13:35.006 2.680 - 2.693: 99.4033% ( 1) 00:13:35.006 2.693 - 2.707: 99.4085% ( 1) 00:13:35.006 4.400 - 4.427: 99.4136% ( 1) 00:13:35.006 4.560 - 4.587: 99.4188% ( 1) 00:13:35.006 4.613 - 4.640: 99.4239% ( 1) 00:13:35.006 4.640 - 4.667: 99.4290% ( 1) 00:13:35.006 4.933 - 4.960: 99.4393% ( 2) 00:13:35.006 4.960 - 4.987: 99.4445% ( 1) 00:13:35.006 4.987 - 5.013: 99.4496% ( 1) 00:13:35.006 5.013 - 5.040: 99.4599% ( 2) 00:13:35.006 5.120 - 5.147: 99.4702% ( 2) 00:13:35.006 5.147 - 5.173: 99.4753% ( 1) 00:13:35.006 5.253 - 5.280: 99.4856% ( 2) 00:13:35.006 5.333 - 5.360: 99.4959% ( 2) 00:13:35.006 5.360 - 5.387: 99.5011% ( 1) 00:13:35.006 5.413 - 5.440: 99.5062% ( 1) 00:13:35.006 5.573 - 5.600: 99.5165% ( 2) 00:13:35.006 5.627 - 5.653: 99.5268% ( 2) 00:13:35.006 5.760 - 5.787: 99.5371% ( 2) 00:13:35.006 5.920 - 5.947: 99.5422% ( 1) 00:13:35.006 6.053 - 6.080: 99.5473% ( 1) 00:13:35.006 6.187 - 6.213: 99.5525% ( 1) 00:13:35.006 6.320 - 6.347: 99.5576% ( 1) 00:13:35.006 6.480 - 6.507: 99.5628% ( 1) 00:13:35.006 6.987 - 7.040: 99.5679% ( 1) 00:13:35.006 10.667 - 10.720: 99.5731% ( 1) 00:13:35.006 11.413 - 11.467: 99.5782% ( 1) 00:13:35.006 12.213 - 12.267: 99.5834% ( 1) 00:13:35.006 15.467 - 15.573: 99.5885% ( 1) 00:13:35.006 3932.160 - 3959.467: 99.5936% ( 1) 00:13:35.006 3986.773 - 4014.080: 100.0000% ( 79) 00:13:35.006 00:13:35.006 19:29:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:35.006 19:29:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:35.006 19:29:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:35.006 19:29:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:35.006 19:29:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:35.266 [ 00:13:35.266 { 00:13:35.266 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:35.266 "subtype": "Discovery", 00:13:35.266 "listen_addresses": [], 00:13:35.266 "allow_any_host": true, 00:13:35.266 "hosts": [] 00:13:35.266 }, 00:13:35.266 { 00:13:35.266 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:35.266 "subtype": "NVMe", 00:13:35.266 "listen_addresses": [ 00:13:35.266 { 00:13:35.266 "trtype": "VFIOUSER", 00:13:35.266 "adrfam": 
"IPv4", 00:13:35.266 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:35.266 "trsvcid": "0" 00:13:35.266 } 00:13:35.266 ], 00:13:35.266 "allow_any_host": true, 00:13:35.266 "hosts": [], 00:13:35.266 "serial_number": "SPDK1", 00:13:35.266 "model_number": "SPDK bdev Controller", 00:13:35.266 "max_namespaces": 32, 00:13:35.266 "min_cntlid": 1, 00:13:35.266 "max_cntlid": 65519, 00:13:35.266 "namespaces": [ 00:13:35.266 { 00:13:35.266 "nsid": 1, 00:13:35.266 "bdev_name": "Malloc1", 00:13:35.266 "name": "Malloc1", 00:13:35.266 "nguid": "357941CAF325452AAD37C3C2D63AF1BC", 00:13:35.266 "uuid": "357941ca-f325-452a-ad37-c3c2d63af1bc" 00:13:35.266 } 00:13:35.266 ] 00:13:35.266 }, 00:13:35.266 { 00:13:35.266 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:35.266 "subtype": "NVMe", 00:13:35.266 "listen_addresses": [ 00:13:35.266 { 00:13:35.266 "trtype": "VFIOUSER", 00:13:35.266 "adrfam": "IPv4", 00:13:35.266 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:35.266 "trsvcid": "0" 00:13:35.266 } 00:13:35.266 ], 00:13:35.266 "allow_any_host": true, 00:13:35.266 "hosts": [], 00:13:35.266 "serial_number": "SPDK2", 00:13:35.266 "model_number": "SPDK bdev Controller", 00:13:35.266 "max_namespaces": 32, 00:13:35.266 "min_cntlid": 1, 00:13:35.266 "max_cntlid": 65519, 00:13:35.266 "namespaces": [ 00:13:35.266 { 00:13:35.266 "nsid": 1, 00:13:35.266 "bdev_name": "Malloc2", 00:13:35.266 "name": "Malloc2", 00:13:35.266 "nguid": "900F964098D14D80B11D595DBD365723", 00:13:35.266 "uuid": "900f9640-98d1-4d80-b11d-595dbd365723" 00:13:35.266 } 00:13:35.266 ] 00:13:35.266 } 00:13:35.266 ] 00:13:35.266 19:29:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:35.266 19:29:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:35.266 19:29:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3502749 00:13:35.266 19:29:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:35.266 19:29:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:13:35.266 19:29:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:35.266 19:29:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:35.266 19:29:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:13:35.266 19:29:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:35.266 19:29:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:35.266 EAL: No free 2048 kB hugepages reported on node 1 00:13:35.527 [2024-05-15 19:29:01.500558] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:35.527 Malloc3 00:13:35.527 19:29:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:35.789 [2024-05-15 19:29:01.758698] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:35.789 19:29:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:35.789 Asynchronous Event Request test 00:13:35.789 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:35.789 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:35.789 Registering asynchronous event callbacks... 00:13:35.789 Starting namespace attribute notice tests for all controllers... 00:13:35.789 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:35.789 aer_cb - Changed Namespace 00:13:35.789 Cleaning up... 00:13:35.789 [ 00:13:35.789 { 00:13:35.789 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:35.789 "subtype": "Discovery", 00:13:35.789 "listen_addresses": [], 00:13:35.789 "allow_any_host": true, 00:13:35.789 "hosts": [] 00:13:35.789 }, 00:13:35.789 { 00:13:35.789 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:35.789 "subtype": "NVMe", 00:13:35.789 "listen_addresses": [ 00:13:35.789 { 00:13:35.789 "trtype": "VFIOUSER", 00:13:35.789 "adrfam": "IPv4", 00:13:35.789 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:35.789 "trsvcid": "0" 00:13:35.789 } 00:13:35.789 ], 00:13:35.789 "allow_any_host": true, 00:13:35.789 "hosts": [], 00:13:35.789 "serial_number": "SPDK1", 00:13:35.789 "model_number": "SPDK bdev Controller", 00:13:35.789 "max_namespaces": 32, 00:13:35.789 "min_cntlid": 1, 00:13:35.789 "max_cntlid": 65519, 00:13:35.789 "namespaces": [ 00:13:35.789 { 00:13:35.789 "nsid": 1, 00:13:35.789 "bdev_name": "Malloc1", 00:13:35.789 "name": "Malloc1", 00:13:35.789 "nguid": "357941CAF325452AAD37C3C2D63AF1BC", 00:13:35.789 "uuid": "357941ca-f325-452a-ad37-c3c2d63af1bc" 00:13:35.789 }, 00:13:35.789 { 00:13:35.789 "nsid": 2, 00:13:35.789 "bdev_name": "Malloc3", 00:13:35.789 "name": "Malloc3", 00:13:35.789 "nguid": "EAE4CCF75A214480B54FE9D51F83FA13", 00:13:35.789 "uuid": "eae4ccf7-5a21-4480-b54f-e9d51f83fa13" 00:13:35.789 } 00:13:35.789 ] 00:13:35.789 }, 00:13:35.789 { 00:13:35.789 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:35.789 "subtype": "NVMe", 00:13:35.789 "listen_addresses": [ 00:13:35.789 { 00:13:35.789 "trtype": "VFIOUSER", 00:13:35.789 "adrfam": "IPv4", 00:13:35.789 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:35.789 "trsvcid": "0" 00:13:35.789 } 00:13:35.789 ], 00:13:35.789 "allow_any_host": true, 00:13:35.789 "hosts": [], 00:13:35.789 "serial_number": "SPDK2", 00:13:35.789 "model_number": "SPDK bdev Controller", 00:13:35.789 
"max_namespaces": 32, 00:13:35.789 "min_cntlid": 1, 00:13:35.789 "max_cntlid": 65519, 00:13:35.789 "namespaces": [ 00:13:35.789 { 00:13:35.789 "nsid": 1, 00:13:35.789 "bdev_name": "Malloc2", 00:13:35.789 "name": "Malloc2", 00:13:35.789 "nguid": "900F964098D14D80B11D595DBD365723", 00:13:35.789 "uuid": "900f9640-98d1-4d80-b11d-595dbd365723" 00:13:35.789 } 00:13:35.789 ] 00:13:35.789 } 00:13:35.789 ] 00:13:36.052 19:29:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3502749 00:13:36.052 19:29:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:36.052 19:29:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:36.052 19:29:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:36.052 19:29:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:36.052 [2024-05-15 19:29:02.023474] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:13:36.052 [2024-05-15 19:29:02.023516] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3503015 ] 00:13:36.053 EAL: No free 2048 kB hugepages reported on node 1 00:13:36.053 [2024-05-15 19:29:02.055869] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:36.053 [2024-05-15 19:29:02.064562] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:36.053 [2024-05-15 19:29:02.064584] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f860d1a7000 00:13:36.053 [2024-05-15 19:29:02.065567] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:36.053 [2024-05-15 19:29:02.066575] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:36.053 [2024-05-15 19:29:02.067584] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:36.053 [2024-05-15 19:29:02.068593] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:36.053 [2024-05-15 19:29:02.069596] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:36.053 [2024-05-15 19:29:02.070602] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:36.053 [2024-05-15 19:29:02.071610] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:36.053 [2024-05-15 19:29:02.072617] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:36.053 [2024-05-15 19:29:02.073625] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:36.053 [2024-05-15 19:29:02.073642] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f860d19c000 00:13:36.053 [2024-05-15 19:29:02.074969] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:36.053 [2024-05-15 19:29:02.091176] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:36.053 [2024-05-15 19:29:02.091201] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:13:36.053 [2024-05-15 19:29:02.096286] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:36.053 [2024-05-15 19:29:02.096334] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:36.053 [2024-05-15 19:29:02.096414] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:13:36.053 [2024-05-15 19:29:02.096427] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:13:36.053 [2024-05-15 19:29:02.096433] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:13:36.053 [2024-05-15 19:29:02.097297] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:36.053 [2024-05-15 19:29:02.097307] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:13:36.053 [2024-05-15 19:29:02.097318] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:13:36.053 [2024-05-15 19:29:02.098302] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:36.053 [2024-05-15 19:29:02.098311] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:13:36.053 [2024-05-15 19:29:02.098321] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:13:36.053 [2024-05-15 19:29:02.099306] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:36.053 [2024-05-15 19:29:02.099319] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:36.053 [2024-05-15 19:29:02.100319] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:36.053 [2024-05-15 19:29:02.100327] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:13:36.053 [2024-05-15 19:29:02.100332] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:13:36.053 [2024-05-15 19:29:02.100339] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:36.053 [2024-05-15 19:29:02.100444] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:13:36.053 [2024-05-15 19:29:02.100449] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:36.053 [2024-05-15 19:29:02.100453] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:36.053 [2024-05-15 19:29:02.101330] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:36.053 [2024-05-15 19:29:02.102335] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:36.053 [2024-05-15 19:29:02.103348] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:36.053 [2024-05-15 19:29:02.104350] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:36.053 [2024-05-15 19:29:02.104391] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:36.053 [2024-05-15 19:29:02.105365] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:36.053 [2024-05-15 19:29:02.105374] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:36.053 [2024-05-15 19:29:02.105379] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:13:36.053 [2024-05-15 19:29:02.105400] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:13:36.053 [2024-05-15 19:29:02.105411] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:13:36.053 [2024-05-15 19:29:02.105425] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:36.053 [2024-05-15 19:29:02.105430] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:36.053 [2024-05-15 19:29:02.105442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:36.053 [2024-05-15 19:29:02.114321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:36.053 [2024-05-15 19:29:02.114333] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:13:36.053 [2024-05-15 19:29:02.114338] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:13:36.053 [2024-05-15 19:29:02.114342] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:13:36.053 [2024-05-15 19:29:02.114347] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:36.053 [2024-05-15 19:29:02.114351] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:13:36.053 [2024-05-15 19:29:02.114356] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:13:36.053 [2024-05-15 19:29:02.114360] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:13:36.053 [2024-05-15 19:29:02.114370] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:13:36.053 [2024-05-15 19:29:02.114382] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:36.053 [2024-05-15 19:29:02.122318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:36.053 [2024-05-15 19:29:02.122333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:36.053 [2024-05-15 19:29:02.122342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:36.053 [2024-05-15 19:29:02.122353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:36.053 [2024-05-15 19:29:02.122361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:36.053 [2024-05-15 19:29:02.122366] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:13:36.053 [2024-05-15 19:29:02.122373] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:36.053 [2024-05-15 19:29:02.122382] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:36.053 [2024-05-15 19:29:02.130319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:36.053 [2024-05-15 19:29:02.130327] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:13:36.053 [2024-05-15 19:29:02.130334] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:36.053 [2024-05-15 19:29:02.130341] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:13:36.053 [2024-05-15 19:29:02.130346] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:13:36.053 [2024-05-15 19:29:02.130355] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:36.053 [2024-05-15 19:29:02.138321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:36.053 [2024-05-15 19:29:02.138374] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:13:36.053 [2024-05-15 19:29:02.138382] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:13:36.053 [2024-05-15 19:29:02.138389] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:36.053 [2024-05-15 19:29:02.138393] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:36.053 [2024-05-15 19:29:02.138400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:36.053 [2024-05-15 19:29:02.146319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:36.053 [2024-05-15 19:29:02.146332] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:13:36.054 [2024-05-15 19:29:02.146340] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:13:36.054 [2024-05-15 19:29:02.146347] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:13:36.054 [2024-05-15 19:29:02.146354] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:36.054 [2024-05-15 19:29:02.146358] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:36.054 [2024-05-15 19:29:02.146364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:36.054 [2024-05-15 19:29:02.154320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:36.054 [2024-05-15 19:29:02.154334] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:36.054 [2024-05-15 19:29:02.154341] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:36.054 [2024-05-15 19:29:02.154348] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:36.054 [2024-05-15 19:29:02.154352] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:36.054 [2024-05-15 19:29:02.154358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:36.054 [2024-05-15 19:29:02.162319] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:36.054 [2024-05-15 19:29:02.162333] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:36.054 [2024-05-15 19:29:02.162339] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:13:36.054 [2024-05-15 19:29:02.162346] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:13:36.054 [2024-05-15 19:29:02.162352] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:36.054 [2024-05-15 19:29:02.162357] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:13:36.054 [2024-05-15 19:29:02.162362] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:13:36.054 [2024-05-15 19:29:02.162367] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:13:36.054 [2024-05-15 19:29:02.162371] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:13:36.054 [2024-05-15 19:29:02.162389] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:36.054 [2024-05-15 19:29:02.170318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:36.054 [2024-05-15 19:29:02.170331] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:36.054 [2024-05-15 19:29:02.178320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:36.054 [2024-05-15 19:29:02.178333] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:36.054 [2024-05-15 19:29:02.186318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:36.054 [2024-05-15 19:29:02.186341] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:36.054 [2024-05-15 19:29:02.194318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:36.054 [2024-05-15 19:29:02.194330] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:36.054 [2024-05-15 19:29:02.194335] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:36.054 [2024-05-15 19:29:02.194338] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:36.054 [2024-05-15 19:29:02.194341] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:36.054 [2024-05-15 19:29:02.194348] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:36.054 [2024-05-15 19:29:02.194357] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:36.054 [2024-05-15 19:29:02.194362] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:36.054 [2024-05-15 19:29:02.194368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:36.054 [2024-05-15 19:29:02.194375] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:36.054 [2024-05-15 19:29:02.194379] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:36.054 [2024-05-15 19:29:02.194385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:36.054 [2024-05-15 19:29:02.194394] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:36.054 [2024-05-15 19:29:02.194399] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:36.054 [2024-05-15 19:29:02.194405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:36.054 [2024-05-15 19:29:02.202319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:36.054 [2024-05-15 19:29:02.202333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:36.054 [2024-05-15 19:29:02.202342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:36.054 [2024-05-15 19:29:02.202351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:36.054 ===================================================== 00:13:36.054 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:36.054 ===================================================== 00:13:36.054 Controller Capabilities/Features 00:13:36.054 ================================ 00:13:36.054 Vendor ID: 4e58 00:13:36.054 Subsystem Vendor ID: 4e58 00:13:36.054 Serial Number: SPDK2 00:13:36.054 Model Number: SPDK bdev Controller 00:13:36.054 Firmware Version: 24.05 00:13:36.054 Recommended Arb Burst: 6 00:13:36.054 IEEE OUI Identifier: 8d 6b 50 00:13:36.054 Multi-path I/O 00:13:36.054 May have multiple subsystem ports: Yes 00:13:36.054 May have multiple controllers: Yes 00:13:36.054 Associated with SR-IOV VF: No 00:13:36.054 Max Data Transfer Size: 131072 00:13:36.054 Max Number of Namespaces: 32 00:13:36.054 Max Number of I/O Queues: 127 00:13:36.054 NVMe Specification Version (VS): 1.3 00:13:36.054 NVMe Specification Version (Identify): 1.3 00:13:36.054 Maximum Queue Entries: 256 00:13:36.054 Contiguous Queues Required: Yes 00:13:36.054 Arbitration Mechanisms Supported 00:13:36.054 Weighted Round Robin: Not Supported 00:13:36.054 Vendor Specific: Not Supported 00:13:36.054 Reset Timeout: 15000 ms 00:13:36.054 Doorbell Stride: 4 bytes 
00:13:36.054 NVM Subsystem Reset: Not Supported 00:13:36.054 Command Sets Supported 00:13:36.054 NVM Command Set: Supported 00:13:36.054 Boot Partition: Not Supported 00:13:36.054 Memory Page Size Minimum: 4096 bytes 00:13:36.054 Memory Page Size Maximum: 4096 bytes 00:13:36.054 Persistent Memory Region: Not Supported 00:13:36.054 Optional Asynchronous Events Supported 00:13:36.054 Namespace Attribute Notices: Supported 00:13:36.054 Firmware Activation Notices: Not Supported 00:13:36.054 ANA Change Notices: Not Supported 00:13:36.054 PLE Aggregate Log Change Notices: Not Supported 00:13:36.054 LBA Status Info Alert Notices: Not Supported 00:13:36.054 EGE Aggregate Log Change Notices: Not Supported 00:13:36.054 Normal NVM Subsystem Shutdown event: Not Supported 00:13:36.054 Zone Descriptor Change Notices: Not Supported 00:13:36.054 Discovery Log Change Notices: Not Supported 00:13:36.054 Controller Attributes 00:13:36.054 128-bit Host Identifier: Supported 00:13:36.054 Non-Operational Permissive Mode: Not Supported 00:13:36.054 NVM Sets: Not Supported 00:13:36.054 Read Recovery Levels: Not Supported 00:13:36.054 Endurance Groups: Not Supported 00:13:36.054 Predictable Latency Mode: Not Supported 00:13:36.054 Traffic Based Keep ALive: Not Supported 00:13:36.054 Namespace Granularity: Not Supported 00:13:36.054 SQ Associations: Not Supported 00:13:36.054 UUID List: Not Supported 00:13:36.054 Multi-Domain Subsystem: Not Supported 00:13:36.054 Fixed Capacity Management: Not Supported 00:13:36.054 Variable Capacity Management: Not Supported 00:13:36.054 Delete Endurance Group: Not Supported 00:13:36.054 Delete NVM Set: Not Supported 00:13:36.054 Extended LBA Formats Supported: Not Supported 00:13:36.054 Flexible Data Placement Supported: Not Supported 00:13:36.054 00:13:36.054 Controller Memory Buffer Support 00:13:36.054 ================================ 00:13:36.054 Supported: No 00:13:36.054 00:13:36.054 Persistent Memory Region Support 00:13:36.054 ================================ 00:13:36.054 Supported: No 00:13:36.054 00:13:36.054 Admin Command Set Attributes 00:13:36.054 ============================ 00:13:36.054 Security Send/Receive: Not Supported 00:13:36.054 Format NVM: Not Supported 00:13:36.054 Firmware Activate/Download: Not Supported 00:13:36.054 Namespace Management: Not Supported 00:13:36.054 Device Self-Test: Not Supported 00:13:36.054 Directives: Not Supported 00:13:36.054 NVMe-MI: Not Supported 00:13:36.054 Virtualization Management: Not Supported 00:13:36.054 Doorbell Buffer Config: Not Supported 00:13:36.054 Get LBA Status Capability: Not Supported 00:13:36.054 Command & Feature Lockdown Capability: Not Supported 00:13:36.054 Abort Command Limit: 4 00:13:36.055 Async Event Request Limit: 4 00:13:36.055 Number of Firmware Slots: N/A 00:13:36.055 Firmware Slot 1 Read-Only: N/A 00:13:36.055 Firmware Activation Without Reset: N/A 00:13:36.055 Multiple Update Detection Support: N/A 00:13:36.055 Firmware Update Granularity: No Information Provided 00:13:36.055 Per-Namespace SMART Log: No 00:13:36.055 Asymmetric Namespace Access Log Page: Not Supported 00:13:36.055 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:36.055 Command Effects Log Page: Supported 00:13:36.055 Get Log Page Extended Data: Supported 00:13:36.055 Telemetry Log Pages: Not Supported 00:13:36.055 Persistent Event Log Pages: Not Supported 00:13:36.055 Supported Log Pages Log Page: May Support 00:13:36.055 Commands Supported & Effects Log Page: Not Supported 00:13:36.055 Feature Identifiers & Effects Log Page:May 
Support 00:13:36.055 NVMe-MI Commands & Effects Log Page: May Support 00:13:36.055 Data Area 4 for Telemetry Log: Not Supported 00:13:36.055 Error Log Page Entries Supported: 128 00:13:36.055 Keep Alive: Supported 00:13:36.055 Keep Alive Granularity: 10000 ms 00:13:36.055 00:13:36.055 NVM Command Set Attributes 00:13:36.055 ========================== 00:13:36.055 Submission Queue Entry Size 00:13:36.055 Max: 64 00:13:36.055 Min: 64 00:13:36.055 Completion Queue Entry Size 00:13:36.055 Max: 16 00:13:36.055 Min: 16 00:13:36.055 Number of Namespaces: 32 00:13:36.055 Compare Command: Supported 00:13:36.055 Write Uncorrectable Command: Not Supported 00:13:36.055 Dataset Management Command: Supported 00:13:36.055 Write Zeroes Command: Supported 00:13:36.055 Set Features Save Field: Not Supported 00:13:36.055 Reservations: Not Supported 00:13:36.055 Timestamp: Not Supported 00:13:36.055 Copy: Supported 00:13:36.055 Volatile Write Cache: Present 00:13:36.055 Atomic Write Unit (Normal): 1 00:13:36.055 Atomic Write Unit (PFail): 1 00:13:36.055 Atomic Compare & Write Unit: 1 00:13:36.055 Fused Compare & Write: Supported 00:13:36.055 Scatter-Gather List 00:13:36.055 SGL Command Set: Supported (Dword aligned) 00:13:36.055 SGL Keyed: Not Supported 00:13:36.055 SGL Bit Bucket Descriptor: Not Supported 00:13:36.055 SGL Metadata Pointer: Not Supported 00:13:36.055 Oversized SGL: Not Supported 00:13:36.055 SGL Metadata Address: Not Supported 00:13:36.055 SGL Offset: Not Supported 00:13:36.055 Transport SGL Data Block: Not Supported 00:13:36.055 Replay Protected Memory Block: Not Supported 00:13:36.055 00:13:36.055 Firmware Slot Information 00:13:36.055 ========================= 00:13:36.055 Active slot: 1 00:13:36.055 Slot 1 Firmware Revision: 24.05 00:13:36.055 00:13:36.055 00:13:36.055 Commands Supported and Effects 00:13:36.055 ============================== 00:13:36.055 Admin Commands 00:13:36.055 -------------- 00:13:36.055 Get Log Page (02h): Supported 00:13:36.055 Identify (06h): Supported 00:13:36.055 Abort (08h): Supported 00:13:36.055 Set Features (09h): Supported 00:13:36.055 Get Features (0Ah): Supported 00:13:36.055 Asynchronous Event Request (0Ch): Supported 00:13:36.055 Keep Alive (18h): Supported 00:13:36.055 I/O Commands 00:13:36.055 ------------ 00:13:36.055 Flush (00h): Supported LBA-Change 00:13:36.055 Write (01h): Supported LBA-Change 00:13:36.055 Read (02h): Supported 00:13:36.055 Compare (05h): Supported 00:13:36.055 Write Zeroes (08h): Supported LBA-Change 00:13:36.055 Dataset Management (09h): Supported LBA-Change 00:13:36.055 Copy (19h): Supported LBA-Change 00:13:36.055 Unknown (79h): Supported LBA-Change 00:13:36.055 Unknown (7Ah): Supported 00:13:36.055 00:13:36.055 Error Log 00:13:36.055 ========= 00:13:36.055 00:13:36.055 Arbitration 00:13:36.055 =========== 00:13:36.055 Arbitration Burst: 1 00:13:36.055 00:13:36.055 Power Management 00:13:36.055 ================ 00:13:36.055 Number of Power States: 1 00:13:36.055 Current Power State: Power State #0 00:13:36.055 Power State #0: 00:13:36.055 Max Power: 0.00 W 00:13:36.055 Non-Operational State: Operational 00:13:36.055 Entry Latency: Not Reported 00:13:36.055 Exit Latency: Not Reported 00:13:36.055 Relative Read Throughput: 0 00:13:36.055 Relative Read Latency: 0 00:13:36.055 Relative Write Throughput: 0 00:13:36.055 Relative Write Latency: 0 00:13:36.055 Idle Power: Not Reported 00:13:36.055 Active Power: Not Reported 00:13:36.055 Non-Operational Permissive Mode: Not Supported 00:13:36.055 00:13:36.055 Health Information 
00:13:36.055 ================== 00:13:36.055 Critical Warnings: 00:13:36.055 Available Spare Space: OK 00:13:36.055 Temperature: OK 00:13:36.055 Device Reliability: OK 00:13:36.055 Read Only: No 00:13:36.055 Volatile Memory Backup: OK 00:13:36.055 [2024-05-15 19:29:02.202453] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:36.055 [2024-05-15 19:29:02.210320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:36.055 [2024-05-15 19:29:02.210348] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:13:36.055 [2024-05-15 19:29:02.210357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.055 [2024-05-15 19:29:02.210363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.055 [2024-05-15 19:29:02.210370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.055 [2024-05-15 19:29:02.210376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.055 [2024-05-15 19:29:02.210427] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:36.055 [2024-05-15 19:29:02.210438] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:36.055 [2024-05-15 19:29:02.211429] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:36.055 [2024-05-15 19:29:02.211478] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:13:36.055 [2024-05-15 19:29:02.211484] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:13:36.055 [2024-05-15 19:29:02.212431] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:36.055 [2024-05-15 19:29:02.212445] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:13:36.055 [2024-05-15 19:29:02.212493] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:36.055 [2024-05-15 19:29:02.213871] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:36.316 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:36.316 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:36.316 Available Spare: 0% 00:13:36.316 Available Spare Threshold: 0% 00:13:36.316 Life Percentage Used: 0% 00:13:36.316 Data Units Read: 0 00:13:36.316 Data Units Written: 0 00:13:36.316 Host Read Commands: 0 00:13:36.316 Host Write Commands: 0 00:13:36.316 Controller Busy Time: 0 minutes 00:13:36.316 Power Cycles: 0 00:13:36.316 Power On Hours: 0 hours 00:13:36.316 Unsafe Shutdowns: 0 00:13:36.316 Unrecoverable Media Errors: 0 00:13:36.316 Lifetime Error Log Entries: 0 00:13:36.316 Warning Temperature Time: 0
minutes 00:13:36.316 Critical Temperature Time: 0 minutes 00:13:36.316 00:13:36.316 Number of Queues 00:13:36.316 ================ 00:13:36.316 Number of I/O Submission Queues: 127 00:13:36.316 Number of I/O Completion Queues: 127 00:13:36.316 00:13:36.316 Active Namespaces 00:13:36.316 ================= 00:13:36.316 Namespace ID:1 00:13:36.316 Error Recovery Timeout: Unlimited 00:13:36.316 Command Set Identifier: NVM (00h) 00:13:36.316 Deallocate: Supported 00:13:36.316 Deallocated/Unwritten Error: Not Supported 00:13:36.316 Deallocated Read Value: Unknown 00:13:36.316 Deallocate in Write Zeroes: Not Supported 00:13:36.316 Deallocated Guard Field: 0xFFFF 00:13:36.316 Flush: Supported 00:13:36.316 Reservation: Supported 00:13:36.316 Namespace Sharing Capabilities: Multiple Controllers 00:13:36.316 Size (in LBAs): 131072 (0GiB) 00:13:36.316 Capacity (in LBAs): 131072 (0GiB) 00:13:36.316 Utilization (in LBAs): 131072 (0GiB) 00:13:36.316 NGUID: 900F964098D14D80B11D595DBD365723 00:13:36.316 UUID: 900f9640-98d1-4d80-b11d-595dbd365723 00:13:36.316 Thin Provisioning: Not Supported 00:13:36.316 Per-NS Atomic Units: Yes 00:13:36.316 Atomic Boundary Size (Normal): 0 00:13:36.316 Atomic Boundary Size (PFail): 0 00:13:36.316 Atomic Boundary Offset: 0 00:13:36.316 Maximum Single Source Range Length: 65535 00:13:36.316 Maximum Copy Length: 65535 00:13:36.316 Maximum Source Range Count: 1 00:13:36.316 NGUID/EUI64 Never Reused: No 00:13:36.316 Namespace Write Protected: No 00:13:36.316 Number of LBA Formats: 1 00:13:36.316 Current LBA Format: LBA Format #00 00:13:36.316 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:36.316 00:13:36.316 19:29:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:36.316 EAL: No free 2048 kB hugepages reported on node 1 00:13:36.316 [2024-05-15 19:29:02.413588] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:41.599 Initializing NVMe Controllers 00:13:41.599 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:41.599 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:41.599 Initialization complete. Launching workers. 
00:13:41.599 ======================================================== 00:13:41.599 Latency(us) 00:13:41.599 Device Information : IOPS MiB/s Average min max 00:13:41.599 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 44106.28 172.29 2901.53 902.84 8026.46 00:13:41.599 ======================================================== 00:13:41.599 Total : 44106.28 172.29 2901.53 902.84 8026.46 00:13:41.599 00:13:41.599 [2024-05-15 19:29:07.521533] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:41.599 19:29:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:41.599 EAL: No free 2048 kB hugepages reported on node 1 00:13:41.599 [2024-05-15 19:29:07.725196] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:46.880 Initializing NVMe Controllers 00:13:46.880 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:46.880 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:46.880 Initialization complete. Launching workers. 00:13:46.880 ======================================================== 00:13:46.880 Latency(us) 00:13:46.880 Device Information : IOPS MiB/s Average min max 00:13:46.880 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33915.83 132.48 3773.32 1212.05 7607.11 00:13:46.880 ======================================================== 00:13:46.880 Total : 33915.83 132.48 3773.32 1212.05 7607.11 00:13:46.880 00:13:46.880 [2024-05-15 19:29:12.745110] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:46.880 19:29:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:46.880 EAL: No free 2048 kB hugepages reported on node 1 00:13:46.880 [2024-05-15 19:29:12.979726] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:52.235 [2024-05-15 19:29:18.119412] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:52.235 Initializing NVMe Controllers 00:13:52.235 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:52.235 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:52.235 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:52.235 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:52.235 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:52.235 Initialization complete. Launching workers. 
00:13:52.235 Starting thread on core 2 00:13:52.235 Starting thread on core 3 00:13:52.235 Starting thread on core 1 00:13:52.235 19:29:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:52.235 EAL: No free 2048 kB hugepages reported on node 1 00:13:52.235 [2024-05-15 19:29:18.399897] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:55.534 [2024-05-15 19:29:21.447496] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:55.534 Initializing NVMe Controllers 00:13:55.534 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:55.534 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:55.534 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:55.534 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:55.534 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:55.534 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:55.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:55.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:55.534 Initialization complete. Launching workers. 00:13:55.534 Starting thread on core 1 with urgent priority queue 00:13:55.534 Starting thread on core 2 with urgent priority queue 00:13:55.534 Starting thread on core 3 with urgent priority queue 00:13:55.534 Starting thread on core 0 with urgent priority queue 00:13:55.534 SPDK bdev Controller (SPDK2 ) core 0: 10935.33 IO/s 9.14 secs/100000 ios 00:13:55.534 SPDK bdev Controller (SPDK2 ) core 1: 12452.00 IO/s 8.03 secs/100000 ios 00:13:55.534 SPDK bdev Controller (SPDK2 ) core 2: 8344.67 IO/s 11.98 secs/100000 ios 00:13:55.534 SPDK bdev Controller (SPDK2 ) core 3: 12477.00 IO/s 8.01 secs/100000 ios 00:13:55.534 ======================================================== 00:13:55.534 00:13:55.534 19:29:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:55.534 EAL: No free 2048 kB hugepages reported on node 1 00:13:55.794 [2024-05-15 19:29:21.722768] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:55.794 Initializing NVMe Controllers 00:13:55.794 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:55.794 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:55.794 Namespace ID: 1 size: 0GB 00:13:55.794 Initialization complete. 00:13:55.794 INFO: using host memory buffer for IO 00:13:55.794 Hello world! 
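The runs above all drive the same vfio-user controller through different SPDK example binaries: spdk_nvme_perf for the read and write throughput passes, then reconnect, arbitration and hello_world, with the per-I/O overhead tool following further below. They differ only in workload knobs; the -r transport ID string stays identical. A minimal sketch of that pattern, reusing the binaries and arguments exactly as captured in this log (the PERF, EXAMPLES and TRID variables are illustrative shorthand only, not part of the test scripts):

    PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
    EXAMPLES=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples
    TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
    for workload in read write; do
        # -q 128: queue depth, -o 4096: 4 KiB I/Os, -t 5: seconds, -c 0x2: core mask (other flags carried over verbatim)
        "$PERF" -r "$TRID" -s 256 -g -q 128 -o 4096 -w "$workload" -t 5 -c 0x2
    done
    "$EXAMPLES"/reconnect   -r "$TRID" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE   # mixed read/write on cores 1-3
    "$EXAMPLES"/arbitration -r "$TRID" -t 3 -d 256 -g                                 # priority-queue arbitration run shown above
    "$EXAMPLES"/hello_world -r "$TRID" -d 256 -g                                      # prints the 'Hello world!' line above
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead \
        -o 4096 -t 1 -H -g -d 256 -r "$TRID"                                          # submit/complete histograms below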
00:13:55.794 [2024-05-15 19:29:21.731813] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:55.794 19:29:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:55.794 EAL: No free 2048 kB hugepages reported on node 1 00:13:56.054 [2024-05-15 19:29:21.999603] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:56.995 Initializing NVMe Controllers 00:13:56.995 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:56.995 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:56.995 Initialization complete. Launching workers. 00:13:56.995 submit (in ns) avg, min, max = 9860.0, 3933.3, 4005613.3 00:13:56.995 complete (in ns) avg, min, max = 19998.6, 2398.3, 4005510.8 00:13:56.995 00:13:56.995 Submit histogram 00:13:56.995 ================ 00:13:56.995 Range in us Cumulative Count 00:13:56.995 3.920 - 3.947: 0.2490% ( 38) 00:13:56.995 3.947 - 3.973: 2.9487% ( 412) 00:13:56.995 3.973 - 4.000: 9.1082% ( 940) 00:13:56.995 4.000 - 4.027: 19.0486% ( 1517) 00:13:56.995 4.027 - 4.053: 30.5419% ( 1754) 00:13:56.995 4.053 - 4.080: 42.1598% ( 1773) 00:13:56.995 4.080 - 4.107: 57.6830% ( 2369) 00:13:56.995 4.107 - 4.133: 75.3751% ( 2700) 00:13:56.995 4.133 - 4.160: 88.6246% ( 2022) 00:13:56.995 4.160 - 4.187: 95.3673% ( 1029) 00:13:56.995 4.187 - 4.213: 98.0932% ( 416) 00:13:56.995 4.213 - 4.240: 99.0433% ( 145) 00:13:56.995 4.240 - 4.267: 99.3251% ( 43) 00:13:56.995 4.267 - 4.293: 99.3513% ( 4) 00:13:56.995 4.293 - 4.320: 99.3578% ( 1) 00:13:56.995 4.320 - 4.347: 99.3709% ( 2) 00:13:56.995 4.533 - 4.560: 99.3775% ( 1) 00:13:56.995 4.560 - 4.587: 99.3906% ( 2) 00:13:56.995 4.613 - 4.640: 99.3972% ( 1) 00:13:56.995 4.747 - 4.773: 99.4037% ( 1) 00:13:56.995 4.800 - 4.827: 99.4168% ( 2) 00:13:56.995 4.880 - 4.907: 99.4234% ( 1) 00:13:56.995 4.907 - 4.933: 99.4365% ( 2) 00:13:56.995 4.960 - 4.987: 99.4430% ( 1) 00:13:56.995 5.093 - 5.120: 99.4496% ( 1) 00:13:56.995 5.120 - 5.147: 99.4561% ( 1) 00:13:56.995 5.173 - 5.200: 99.4627% ( 1) 00:13:56.995 5.227 - 5.253: 99.4692% ( 1) 00:13:56.995 5.333 - 5.360: 99.4758% ( 1) 00:13:56.995 5.600 - 5.627: 99.4823% ( 1) 00:13:56.995 5.627 - 5.653: 99.4889% ( 1) 00:13:56.995 5.813 - 5.840: 99.4954% ( 1) 00:13:56.995 5.893 - 5.920: 99.5020% ( 1) 00:13:56.995 6.027 - 6.053: 99.5086% ( 1) 00:13:56.995 6.107 - 6.133: 99.5217% ( 2) 00:13:56.995 6.160 - 6.187: 99.5282% ( 1) 00:13:56.995 6.187 - 6.213: 99.5479% ( 3) 00:13:56.995 6.267 - 6.293: 99.5544% ( 1) 00:13:56.995 6.320 - 6.347: 99.5610% ( 1) 00:13:56.995 6.533 - 6.560: 99.5675% ( 1) 00:13:56.995 6.640 - 6.667: 99.5741% ( 1) 00:13:56.995 6.667 - 6.693: 99.5806% ( 1) 00:13:56.995 6.693 - 6.720: 99.5937% ( 2) 00:13:56.995 6.720 - 6.747: 99.6003% ( 1) 00:13:56.995 6.880 - 6.933: 99.6068% ( 1) 00:13:56.995 6.933 - 6.987: 99.6134% ( 1) 00:13:56.995 6.987 - 7.040: 99.6265% ( 2) 00:13:56.995 7.040 - 7.093: 99.6396% ( 2) 00:13:56.995 7.093 - 7.147: 99.6462% ( 1) 00:13:56.995 7.200 - 7.253: 99.6527% ( 1) 00:13:56.995 7.253 - 7.307: 99.6593% ( 1) 00:13:56.995 7.307 - 7.360: 99.6658% ( 1) 00:13:56.995 7.413 - 7.467: 99.6724% ( 1) 00:13:56.995 7.467 - 7.520: 99.6920% ( 3) 00:13:56.995 7.520 - 7.573: 99.7117% ( 3) 00:13:56.995 7.627 - 7.680: 99.7182% ( 1) 00:13:56.995 7.680 - 7.733: 99.7313% ( 2) 
00:13:56.995 7.733 - 7.787: 99.7379% ( 1) 00:13:56.995 7.787 - 7.840: 99.7641% ( 4) 00:13:56.995 7.840 - 7.893: 99.7838% ( 3) 00:13:56.995 7.947 - 8.000: 99.7903% ( 1) 00:13:56.995 8.053 - 8.107: 99.7969% ( 1) 00:13:56.995 8.107 - 8.160: 99.8034% ( 1) 00:13:56.995 8.160 - 8.213: 99.8165% ( 2) 00:13:56.995 8.213 - 8.267: 99.8231% ( 1) 00:13:56.995 8.587 - 8.640: 99.8296% ( 1) 00:13:56.995 8.800 - 8.853: 99.8362% ( 1) 00:13:56.995 9.013 - 9.067: 99.8427% ( 1) 00:13:56.995 9.547 - 9.600: 99.8493% ( 1) 00:13:56.995 12.800 - 12.853: 99.8558% ( 1) 00:13:56.995 3986.773 - 4014.080: 100.0000% ( 22) 00:13:56.995 00:13:56.995 Complete histogram 00:13:56.995 ================== 00:13:56.995 Range in us Cumulative Count 00:13:56.995 2.387 - 2.400: 0.0066% ( 1) 00:13:56.995 2.400 - 2.413: 0.3538% ( 53) 00:13:56.995 2.413 - 2.427: 1.5595% ( 184) 00:13:56.995 2.427 - 2.440: 1.7561% ( 30) 00:13:56.995 2.440 - 2.453: 2.0248% ( 41) 00:13:56.995 2.453 - 2.467: 42.2711% ( 6142) 00:13:56.995 2.467 - [2024-05-15 19:29:23.093961] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:56.995 2.480: 51.0124% ( 1334) 00:13:56.995 2.480 - 2.493: 67.4989% ( 2516) 00:13:56.995 2.493 - 2.507: 77.0592% ( 1459) 00:13:56.995 2.507 - 2.520: 81.5215% ( 681) 00:13:56.995 2.520 - 2.533: 82.9762% ( 222) 00:13:56.995 2.533 - 2.547: 86.9013% ( 599) 00:13:56.995 2.547 - 2.560: 91.8485% ( 755) 00:13:56.995 2.560 - 2.573: 94.9938% ( 480) 00:13:56.995 2.573 - 2.587: 97.4379% ( 373) 00:13:56.995 2.587 - 2.600: 98.6567% ( 186) 00:13:56.995 2.600 - 2.613: 99.1154% ( 70) 00:13:56.995 2.613 - 2.627: 99.2399% ( 19) 00:13:56.995 2.627 - 2.640: 99.2727% ( 5) 00:13:56.995 2.640 - 2.653: 99.2858% ( 2) 00:13:56.995 2.653 - 2.667: 99.2923% ( 1) 00:13:56.995 2.667 - 2.680: 99.3054% ( 2) 00:13:56.995 2.707 - 2.720: 99.3185% ( 2) 00:13:56.995 2.760 - 2.773: 99.3251% ( 1) 00:13:56.995 2.907 - 2.920: 99.3316% ( 1) 00:13:56.995 2.987 - 3.000: 99.3382% ( 1) 00:13:56.995 3.000 - 3.013: 99.3447% ( 1) 00:13:56.995 3.040 - 3.053: 99.3513% ( 1) 00:13:56.995 3.133 - 3.147: 99.3578% ( 1) 00:13:56.995 3.173 - 3.187: 99.3644% ( 1) 00:13:56.995 3.187 - 3.200: 99.3709% ( 1) 00:13:56.995 3.440 - 3.467: 99.3775% ( 1) 00:13:56.995 4.800 - 4.827: 99.3841% ( 1) 00:13:56.995 4.827 - 4.853: 99.3906% ( 1) 00:13:56.995 4.933 - 4.960: 99.3972% ( 1) 00:13:56.995 5.120 - 5.147: 99.4037% ( 1) 00:13:56.995 5.227 - 5.253: 99.4103% ( 1) 00:13:56.995 5.253 - 5.280: 99.4168% ( 1) 00:13:56.995 5.413 - 5.440: 99.4234% ( 1) 00:13:56.995 5.653 - 5.680: 99.4299% ( 1) 00:13:56.995 5.707 - 5.733: 99.4365% ( 1) 00:13:56.995 5.787 - 5.813: 99.4496% ( 2) 00:13:56.995 5.920 - 5.947: 99.4627% ( 2) 00:13:56.995 6.160 - 6.187: 99.4692% ( 1) 00:13:56.995 6.267 - 6.293: 99.4758% ( 1) 00:13:56.995 6.293 - 6.320: 99.4823% ( 1) 00:13:56.995 6.347 - 6.373: 99.4954% ( 2) 00:13:56.995 6.400 - 6.427: 99.5020% ( 1) 00:13:56.995 6.507 - 6.533: 99.5086% ( 1) 00:13:56.995 6.693 - 6.720: 99.5151% ( 1) 00:13:56.995 6.827 - 6.880: 99.5282% ( 2) 00:13:56.995 8.587 - 8.640: 99.5348% ( 1) 00:13:56.995 11.627 - 11.680: 99.5413% ( 1) 00:13:56.995 13.067 - 13.120: 99.5479% ( 1) 00:13:56.995 44.373 - 44.587: 99.5544% ( 1) 00:13:56.995 82.773 - 83.200: 99.5610% ( 1) 00:13:56.995 3263.147 - 3276.800: 99.5675% ( 1) 00:13:56.995 3986.773 - 4014.080: 100.0000% ( 66) 00:13:56.995 00:13:56.995 19:29:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:56.995 19:29:23 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:56.995 19:29:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:56.995 19:29:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:56.995 19:29:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:57.256 [ 00:13:57.256 { 00:13:57.256 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:57.256 "subtype": "Discovery", 00:13:57.256 "listen_addresses": [], 00:13:57.256 "allow_any_host": true, 00:13:57.256 "hosts": [] 00:13:57.256 }, 00:13:57.256 { 00:13:57.256 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:57.256 "subtype": "NVMe", 00:13:57.256 "listen_addresses": [ 00:13:57.256 { 00:13:57.256 "trtype": "VFIOUSER", 00:13:57.256 "adrfam": "IPv4", 00:13:57.256 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:57.256 "trsvcid": "0" 00:13:57.256 } 00:13:57.256 ], 00:13:57.256 "allow_any_host": true, 00:13:57.256 "hosts": [], 00:13:57.256 "serial_number": "SPDK1", 00:13:57.256 "model_number": "SPDK bdev Controller", 00:13:57.256 "max_namespaces": 32, 00:13:57.256 "min_cntlid": 1, 00:13:57.256 "max_cntlid": 65519, 00:13:57.256 "namespaces": [ 00:13:57.256 { 00:13:57.256 "nsid": 1, 00:13:57.256 "bdev_name": "Malloc1", 00:13:57.256 "name": "Malloc1", 00:13:57.256 "nguid": "357941CAF325452AAD37C3C2D63AF1BC", 00:13:57.256 "uuid": "357941ca-f325-452a-ad37-c3c2d63af1bc" 00:13:57.256 }, 00:13:57.256 { 00:13:57.256 "nsid": 2, 00:13:57.256 "bdev_name": "Malloc3", 00:13:57.256 "name": "Malloc3", 00:13:57.256 "nguid": "EAE4CCF75A214480B54FE9D51F83FA13", 00:13:57.256 "uuid": "eae4ccf7-5a21-4480-b54f-e9d51f83fa13" 00:13:57.256 } 00:13:57.256 ] 00:13:57.256 }, 00:13:57.256 { 00:13:57.256 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:57.256 "subtype": "NVMe", 00:13:57.256 "listen_addresses": [ 00:13:57.256 { 00:13:57.256 "trtype": "VFIOUSER", 00:13:57.256 "adrfam": "IPv4", 00:13:57.256 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:57.256 "trsvcid": "0" 00:13:57.256 } 00:13:57.256 ], 00:13:57.256 "allow_any_host": true, 00:13:57.256 "hosts": [], 00:13:57.256 "serial_number": "SPDK2", 00:13:57.256 "model_number": "SPDK bdev Controller", 00:13:57.256 "max_namespaces": 32, 00:13:57.256 "min_cntlid": 1, 00:13:57.256 "max_cntlid": 65519, 00:13:57.256 "namespaces": [ 00:13:57.256 { 00:13:57.256 "nsid": 1, 00:13:57.256 "bdev_name": "Malloc2", 00:13:57.256 "name": "Malloc2", 00:13:57.256 "nguid": "900F964098D14D80B11D595DBD365723", 00:13:57.256 "uuid": "900f9640-98d1-4d80-b11d-595dbd365723" 00:13:57.256 } 00:13:57.256 ] 00:13:57.256 } 00:13:57.256 ] 00:13:57.256 19:29:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:57.256 19:29:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:57.256 19:29:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3507119 00:13:57.256 19:29:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:57.256 19:29:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:13:57.256 19:29:23 nvmf_tcp.nvmf_vfio_user -- 
common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:57.256 19:29:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:57.256 19:29:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:13:57.256 19:29:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:57.256 19:29:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:57.256 EAL: No free 2048 kB hugepages reported on node 1 00:13:57.516 [2024-05-15 19:29:23.526734] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:57.516 Malloc4 00:13:57.516 19:29:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:57.776 [2024-05-15 19:29:23.782382] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:57.776 19:29:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:57.776 Asynchronous Event Request test 00:13:57.776 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:57.776 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:57.776 Registering asynchronous event callbacks... 00:13:57.776 Starting namespace attribute notice tests for all controllers... 00:13:57.776 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:57.776 aer_cb - Changed Namespace 00:13:57.776 Cleaning up... 
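The aer_vfio_user block above is the namespace-attribute-notice check: list the subsystems, start the aer tool with a touch file so the script knows when it is armed, then hot-add a second namespace so the target raises the asynchronous event reported as 'aer_cb - Changed Namespace'. A condensed sketch of that flow, using the RPCs and the aer binary exactly as they appear in the trace (SPDK_ROOT and the simplified wait loop are shorthand for the helpers the script actually uses):

    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
    rm -f /tmp/aer_touch_file
    # Start the AER listener in the background; -n 2 and -t are carried over verbatim from the run above,
    # and -t names the touch file the wait loop below polls for.
    "$SPDK_ROOT"/test/nvme/aer/aer -r "$TRID" -n 2 -g -t /tmp/aer_touch_file &
    aerpid=$!
    while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done
    # Hot-add namespace 2; the controller then reports the namespace-attribute-changed event.
    "$SPDK_ROOT"/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
    "$SPDK_ROOT"/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
    wait $aerpid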
00:13:58.036 [ 00:13:58.036 { 00:13:58.036 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:58.036 "subtype": "Discovery", 00:13:58.036 "listen_addresses": [], 00:13:58.036 "allow_any_host": true, 00:13:58.036 "hosts": [] 00:13:58.036 }, 00:13:58.036 { 00:13:58.036 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:58.036 "subtype": "NVMe", 00:13:58.036 "listen_addresses": [ 00:13:58.036 { 00:13:58.036 "trtype": "VFIOUSER", 00:13:58.036 "adrfam": "IPv4", 00:13:58.036 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:58.036 "trsvcid": "0" 00:13:58.036 } 00:13:58.036 ], 00:13:58.036 "allow_any_host": true, 00:13:58.036 "hosts": [], 00:13:58.036 "serial_number": "SPDK1", 00:13:58.036 "model_number": "SPDK bdev Controller", 00:13:58.036 "max_namespaces": 32, 00:13:58.036 "min_cntlid": 1, 00:13:58.036 "max_cntlid": 65519, 00:13:58.036 "namespaces": [ 00:13:58.036 { 00:13:58.036 "nsid": 1, 00:13:58.036 "bdev_name": "Malloc1", 00:13:58.036 "name": "Malloc1", 00:13:58.036 "nguid": "357941CAF325452AAD37C3C2D63AF1BC", 00:13:58.036 "uuid": "357941ca-f325-452a-ad37-c3c2d63af1bc" 00:13:58.036 }, 00:13:58.036 { 00:13:58.036 "nsid": 2, 00:13:58.036 "bdev_name": "Malloc3", 00:13:58.036 "name": "Malloc3", 00:13:58.036 "nguid": "EAE4CCF75A214480B54FE9D51F83FA13", 00:13:58.036 "uuid": "eae4ccf7-5a21-4480-b54f-e9d51f83fa13" 00:13:58.036 } 00:13:58.036 ] 00:13:58.036 }, 00:13:58.036 { 00:13:58.036 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:58.036 "subtype": "NVMe", 00:13:58.036 "listen_addresses": [ 00:13:58.036 { 00:13:58.036 "trtype": "VFIOUSER", 00:13:58.036 "adrfam": "IPv4", 00:13:58.036 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:58.036 "trsvcid": "0" 00:13:58.036 } 00:13:58.036 ], 00:13:58.036 "allow_any_host": true, 00:13:58.036 "hosts": [], 00:13:58.036 "serial_number": "SPDK2", 00:13:58.036 "model_number": "SPDK bdev Controller", 00:13:58.036 "max_namespaces": 32, 00:13:58.036 "min_cntlid": 1, 00:13:58.036 "max_cntlid": 65519, 00:13:58.036 "namespaces": [ 00:13:58.036 { 00:13:58.036 "nsid": 1, 00:13:58.036 "bdev_name": "Malloc2", 00:13:58.036 "name": "Malloc2", 00:13:58.036 "nguid": "900F964098D14D80B11D595DBD365723", 00:13:58.036 "uuid": "900f9640-98d1-4d80-b11d-595dbd365723" 00:13:58.036 }, 00:13:58.036 { 00:13:58.036 "nsid": 2, 00:13:58.036 "bdev_name": "Malloc4", 00:13:58.036 "name": "Malloc4", 00:13:58.037 "nguid": "F5643F2D4B4842148B1048351E345207", 00:13:58.037 "uuid": "f5643f2d-4b48-4214-8b10-48351e345207" 00:13:58.037 } 00:13:58.037 ] 00:13:58.037 } 00:13:58.037 ] 00:13:58.037 19:29:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3507119 00:13:58.037 19:29:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:58.037 19:29:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3497909 00:13:58.037 19:29:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 3497909 ']' 00:13:58.037 19:29:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 3497909 00:13:58.037 19:29:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:13:58.037 19:29:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:58.037 19:29:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3497909 00:13:58.037 19:29:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:58.037 19:29:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo 
']' 00:13:58.037 19:29:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3497909' 00:13:58.037 killing process with pid 3497909 00:13:58.037 19:29:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 3497909 00:13:58.037 [2024-05-15 19:29:24.069693] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:58.037 19:29:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 3497909 00:13:58.297 19:29:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:58.297 19:29:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:58.297 19:29:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:58.297 19:29:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:58.297 19:29:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:58.297 19:29:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3507320 00:13:58.297 19:29:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3507320' 00:13:58.297 Process pid: 3507320 00:13:58.297 19:29:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:58.297 19:29:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3507320 00:13:58.297 19:29:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:58.297 19:29:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 3507320 ']' 00:13:58.297 19:29:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.297 19:29:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:58.297 19:29:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.297 19:29:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:58.297 19:29:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:58.297 [2024-05-15 19:29:24.306439] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:58.297 [2024-05-15 19:29:24.307370] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:13:58.297 [2024-05-15 19:29:24.307412] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:58.297 EAL: No free 2048 kB hugepages reported on node 1 00:13:58.297 [2024-05-15 19:29:24.390083] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:58.297 [2024-05-15 19:29:24.455153] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:58.297 [2024-05-15 19:29:24.455191] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:58.297 [2024-05-15 19:29:24.455199] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:58.297 [2024-05-15 19:29:24.455205] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:58.297 [2024-05-15 19:29:24.455210] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:58.297 [2024-05-15 19:29:24.455269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:58.297 [2024-05-15 19:29:24.455384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:58.297 [2024-05-15 19:29:24.455689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:58.297 [2024-05-15 19:29:24.455690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.558 [2024-05-15 19:29:24.521240] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:58.558 [2024-05-15 19:29:24.521310] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:58.558 [2024-05-15 19:29:24.521680] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:58.558 [2024-05-15 19:29:24.522288] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:13:58.558 [2024-05-15 19:29:24.522292] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
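At this point the fixture is being rebuilt in interrupt mode: the target is relaunched with --interrupt-mode (and the transport will be created with the '-M -I' arguments passed to setup_nvmf_vfio_user above), which is what produces the 'Set spdk_thread ... to intr mode' notices. A sketch of just that relaunch, with the flags exactly as captured in the trace (NVMF_TGT and nvmfpid are shorthand):

    NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
    # -i 0: shared-memory/instance id, -e 0xFFFF: tracepoint group mask, -m '[0,1,2,3]': cores 0-3, as logged above
    "$NVMF_TGT" -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
    nvmfpid=$!
    # Events from this instance can be snapshotted with the command the app itself suggests: spdk_trace -s nvmf -i 0

The script then blocks in waitforlisten until the RPC socket at /var/tmp/spdk.sock answers before issuing any rpc.py calls, as the trace below shows.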
00:13:59.129 19:29:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:59.129 19:29:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:13:59.129 19:29:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:00.158 19:29:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:00.419 19:29:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:00.419 19:29:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:00.419 19:29:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:00.419 19:29:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:00.419 19:29:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:00.419 Malloc1 00:14:00.679 19:29:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:00.679 19:29:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:00.939 19:29:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:01.199 [2024-05-15 19:29:27.216152] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:01.199 19:29:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:01.199 19:29:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:01.199 19:29:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:01.459 Malloc2 00:14:01.459 19:29:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:01.720 19:29:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:01.720 19:29:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:01.979 19:29:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:01.979 19:29:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3507320 00:14:01.979 19:29:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 3507320 ']' 00:14:01.979 19:29:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 3507320 
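The wiring traced just above is identical for the two vfio-user controllers: create the socket directory, back it with a 64 MB malloc bdev (512-byte blocks), create the subsystem with its serial number, attach the namespace, and listen on the VFIOUSER address. Condensed into a loop purely as a sketch (the rpc.py subcommands and arguments are verbatim from the trace; RPC and the loop itself are shorthand):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$RPC" nvmf_create_transport -t VFIOUSER -M -I                 # interrupt-mode transport arguments, as above
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        "$RPC" bdev_malloc_create 64 512 -b Malloc$i               # 64 MB bdev, 512-byte blocks
        "$RPC" nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i    # -a: allow any host
        "$RPC" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        "$RPC" nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
            -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done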
00:14:01.979 19:29:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:14:01.979 19:29:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:01.979 19:29:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3507320 00:14:02.239 19:29:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:02.239 19:29:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:02.239 19:29:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3507320' 00:14:02.239 killing process with pid 3507320 00:14:02.239 19:29:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 3507320 00:14:02.239 [2024-05-15 19:29:28.175624] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:02.239 19:29:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 3507320 00:14:02.239 19:29:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:02.239 19:29:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:02.239 00:14:02.239 real 0m52.401s 00:14:02.239 user 3m28.209s 00:14:02.239 sys 0m3.297s 00:14:02.239 19:29:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:02.239 19:29:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:02.239 ************************************ 00:14:02.239 END TEST nvmf_vfio_user 00:14:02.239 ************************************ 00:14:02.239 19:29:28 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:02.239 19:29:28 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:02.239 19:29:28 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:02.239 19:29:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:02.239 ************************************ 00:14:02.239 START TEST nvmf_vfio_user_nvme_compliance 00:14:02.239 ************************************ 00:14:02.239 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:02.499 * Looking for test storage... 
00:14:02.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:02.499 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:02.499 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:02.499 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:02.499 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:02.499 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:02.499 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:02.499 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:02.499 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:02.499 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:02.499 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:02.499 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:02.499 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:02.499 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:02.499 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:02.499 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:02.499 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:02.499 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:02.499 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:02.499 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:02.499 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:02.499 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:02.499 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:02.499 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.500 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.500 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.500 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:02.500 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.500 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:14:02.500 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:02.500 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:02.500 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:02.500 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:02.500 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:02.500 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:02.500 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:02.500 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:02.500 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:02.500 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:02.500 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:02.500 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:02.500 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:02.500 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=3508209 00:14:02.500 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3508209' 00:14:02.500 Process pid: 3508209 00:14:02.500 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:02.500 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:02.500 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3508209 00:14:02.500 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@827 -- # '[' -z 3508209 ']' 00:14:02.500 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.500 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:02.500 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.500 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:02.500 19:29:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:02.500 [2024-05-15 19:29:28.594507] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:14:02.500 [2024-05-15 19:29:28.594575] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.500 EAL: No free 2048 kB hugepages reported on node 1 00:14:02.760 [2024-05-15 19:29:28.684744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:02.760 [2024-05-15 19:29:28.755188] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:02.760 [2024-05-15 19:29:28.755229] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:02.760 [2024-05-15 19:29:28.755237] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:02.760 [2024-05-15 19:29:28.755243] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:02.760 [2024-05-15 19:29:28.755249] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:02.760 [2024-05-15 19:29:28.755299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.760 [2024-05-15 19:29:28.755441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:02.760 [2024-05-15 19:29:28.755539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.330 19:29:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:03.330 19:29:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # return 0 00:14:03.330 19:29:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:04.712 19:29:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:04.712 19:29:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:04.712 19:29:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:04.712 19:29:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.712 19:29:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:04.712 19:29:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.712 19:29:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:04.712 19:29:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:04.712 19:29:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.712 19:29:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:04.712 malloc0 00:14:04.712 19:29:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.712 19:29:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:04.712 19:29:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.712 19:29:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:04.712 19:29:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.712 19:29:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:04.712 19:29:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.712 19:29:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:04.712 19:29:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.712 19:29:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:04.712 19:29:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.712 19:29:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:04.712 [2024-05-15 19:29:30.557096] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated 
feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:04.712 19:29:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.712 19:29:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:04.712 EAL: No free 2048 kB hugepages reported on node 1 00:14:04.712 00:14:04.712 00:14:04.712 CUnit - A unit testing framework for C - Version 2.1-3 00:14:04.712 http://cunit.sourceforge.net/ 00:14:04.712 00:14:04.712 00:14:04.712 Suite: nvme_compliance 00:14:04.712 Test: admin_identify_ctrlr_verify_dptr ...[2024-05-15 19:29:30.741849] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:04.712 [2024-05-15 19:29:30.743214] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:04.712 [2024-05-15 19:29:30.743229] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:04.712 [2024-05-15 19:29:30.743235] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:04.712 [2024-05-15 19:29:30.744867] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:04.712 passed 00:14:04.712 Test: admin_identify_ctrlr_verify_fused ...[2024-05-15 19:29:30.839485] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:04.712 [2024-05-15 19:29:30.842508] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:04.712 passed 00:14:04.972 Test: admin_identify_ns ...[2024-05-15 19:29:30.938583] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:04.972 [2024-05-15 19:29:30.998329] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:04.972 [2024-05-15 19:29:31.006331] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:04.972 [2024-05-15 19:29:31.027438] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:04.972 passed 00:14:04.972 Test: admin_get_features_mandatory_features ...[2024-05-15 19:29:31.122490] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:04.972 [2024-05-15 19:29:31.125517] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:05.233 passed 00:14:05.233 Test: admin_get_features_optional_features ...[2024-05-15 19:29:31.220069] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:05.233 [2024-05-15 19:29:31.223081] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:05.233 passed 00:14:05.233 Test: admin_set_features_number_of_queues ...[2024-05-15 19:29:31.316250] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:05.493 [2024-05-15 19:29:31.418422] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:05.493 passed 00:14:05.493 Test: admin_get_log_page_mandatory_logs ...[2024-05-15 19:29:31.513461] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:05.493 [2024-05-15 19:29:31.516476] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:05.493 passed 
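The CUnit results above and below come from the standalone nvme_compliance binary, pointed at the minimal single-controller target (malloc0 behind nqn.2021-09.io.spdk:cnode0) that the rpc_cmd calls earlier in this block set up. A sketch of that invocation with the transport ID exactly as logged (SPDK_ROOT is shorthand):

    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_ROOT"/test/nvme/compliance/nvme_compliance -g \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'
    # Each test case enables and disables the controller (the vfio_user.c notices around every
    # 'passed' line); the Run Summary further below reports 18/18 tests and 360/360 asserts.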
00:14:05.493 Test: admin_get_log_page_with_lpo ...[2024-05-15 19:29:31.609586] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:05.493 [2024-05-15 19:29:31.677328] ctrlr.c:2654:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:05.753 [2024-05-15 19:29:31.690393] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:05.753 passed 00:14:05.753 Test: fabric_property_get ...[2024-05-15 19:29:31.784489] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:05.753 [2024-05-15 19:29:31.785768] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:05.753 [2024-05-15 19:29:31.787511] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:05.753 passed 00:14:05.753 Test: admin_delete_io_sq_use_admin_qid ...[2024-05-15 19:29:31.882056] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:05.753 [2024-05-15 19:29:31.883277] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:05.753 [2024-05-15 19:29:31.885061] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:05.753 passed 00:14:06.013 Test: admin_delete_io_sq_delete_sq_twice ...[2024-05-15 19:29:31.978577] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:06.013 [2024-05-15 19:29:32.062325] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:06.013 [2024-05-15 19:29:32.078318] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:06.013 [2024-05-15 19:29:32.083414] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:06.013 passed 00:14:06.013 Test: admin_delete_io_cq_use_admin_qid ...[2024-05-15 19:29:32.177400] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:06.014 [2024-05-15 19:29:32.178626] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:06.014 [2024-05-15 19:29:32.180418] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:06.274 passed 00:14:06.274 Test: admin_delete_io_cq_delete_cq_first ...[2024-05-15 19:29:32.272535] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:06.274 [2024-05-15 19:29:32.349331] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:06.274 [2024-05-15 19:29:32.373319] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:06.274 [2024-05-15 19:29:32.378402] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:06.274 passed 00:14:06.535 Test: admin_create_io_cq_verify_iv_pc ...[2024-05-15 19:29:32.472018] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:06.535 [2024-05-15 19:29:32.473237] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:06.535 [2024-05-15 19:29:32.473257] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:06.535 [2024-05-15 19:29:32.475033] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:06.535 passed 00:14:06.535 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-05-15 
19:29:32.567188] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:06.535 [2024-05-15 19:29:32.658333] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:14:06.535 [2024-05-15 19:29:32.666320] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:06.535 [2024-05-15 19:29:32.674321] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:06.535 [2024-05-15 19:29:32.682324] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:06.535 [2024-05-15 19:29:32.711420] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:06.795 passed 00:14:06.795 Test: admin_create_io_sq_verify_pc ...[2024-05-15 19:29:32.804067] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:06.795 [2024-05-15 19:29:32.820328] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:06.795 [2024-05-15 19:29:32.838220] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:06.795 passed 00:14:06.795 Test: admin_create_io_qp_max_qps ...[2024-05-15 19:29:32.929782] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:08.180 [2024-05-15 19:29:34.035325] nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:14:08.441 [2024-05-15 19:29:34.438988] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:08.441 passed 00:14:08.441 Test: admin_create_io_sq_shared_cq ...[2024-05-15 19:29:34.532582] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:08.701 [2024-05-15 19:29:34.663329] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:08.701 [2024-05-15 19:29:34.700393] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:08.701 passed 00:14:08.701 00:14:08.702 Run Summary: Type Total Ran Passed Failed Inactive 00:14:08.702 suites 1 1 n/a 0 0 00:14:08.702 tests 18 18 18 0 0 00:14:08.702 asserts 360 360 360 0 n/a 00:14:08.702 00:14:08.702 Elapsed time = 1.659 seconds 00:14:08.702 19:29:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3508209 00:14:08.702 19:29:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@946 -- # '[' -z 3508209 ']' 00:14:08.702 19:29:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # kill -0 3508209 00:14:08.702 19:29:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # uname 00:14:08.702 19:29:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:08.702 19:29:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3508209 00:14:08.702 19:29:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:08.702 19:29:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:08.702 19:29:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3508209' 00:14:08.702 killing process with pid 3508209 00:14:08.702 19:29:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@965 -- # kill 3508209 00:14:08.702 [2024-05-15 19:29:34.811728] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:08.702 19:29:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # wait 3508209 00:14:08.963 19:29:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:08.963 19:29:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:08.963 00:14:08.963 real 0m6.545s 00:14:08.963 user 0m18.779s 00:14:08.963 sys 0m0.503s 00:14:08.963 19:29:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:08.963 19:29:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:08.963 ************************************ 00:14:08.963 END TEST nvmf_vfio_user_nvme_compliance 00:14:08.963 ************************************ 00:14:08.963 19:29:34 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:08.963 19:29:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:08.963 19:29:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:08.963 19:29:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:08.963 ************************************ 00:14:08.963 START TEST nvmf_vfio_user_fuzz 00:14:08.963 ************************************ 00:14:08.963 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:08.963 * Looking for test storage... 
00:14:08.963 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:08.963 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:08.963 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:08.963 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:08.963 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:08.963 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:09.224 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:09.224 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:09.224 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:09.224 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:09.224 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:09.224 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:09.224 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:09.224 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:09.224 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:09.224 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:09.224 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:09.224 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:09.224 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:09.224 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:09.224 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:09.224 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:09.224 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:09.224 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.224 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.224 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.224 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:14:09.224 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.224 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:14:09.224 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:09.224 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:09.224 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:09.224 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:09.224 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:09.224 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:09.224 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:09.224 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:09.224 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:09.224 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:09.224 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:09.224 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:09.224 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:09.224 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:09.224 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:09.224 19:29:35 
nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3509606 00:14:09.224 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3509606' 00:14:09.224 Process pid: 3509606 00:14:09.224 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:09.224 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:09.224 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3509606 00:14:09.225 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@827 -- # '[' -z 3509606 ']' 00:14:09.225 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.225 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:09.225 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.225 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:09.225 19:29:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:10.168 19:29:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:10.168 19:29:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # return 0 00:14:10.168 19:29:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:11.111 19:29:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:11.111 19:29:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.111 19:29:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:11.111 19:29:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.111 19:29:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:11.111 19:29:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:11.111 19:29:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.111 19:29:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:11.111 malloc0 00:14:11.111 19:29:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.111 19:29:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:11.111 19:29:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.111 19:29:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:11.111 19:29:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.111 19:29:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:11.111 19:29:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.111 19:29:37 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:14:11.111 19:29:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.111 19:29:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:11.111 19:29:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.111 19:29:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:11.111 19:29:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.111 19:29:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:14:11.111 19:29:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:43.225 Fuzzing completed. Shutting down the fuzz application 00:14:43.225 00:14:43.225 Dumping successful admin opcodes: 00:14:43.225 8, 9, 10, 24, 00:14:43.225 Dumping successful io opcodes: 00:14:43.225 0, 00:14:43.225 NS: 0x200003a1ef00 I/O qp, Total commands completed: 972746, total successful commands: 3809, random_seed: 120344768 00:14:43.225 NS: 0x200003a1ef00 admin qp, Total commands completed: 239622, total successful commands: 1927, random_seed: 4219918848 00:14:43.225 19:30:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:43.225 19:30:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.225 19:30:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:43.225 19:30:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.225 19:30:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3509606 00:14:43.225 19:30:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@946 -- # '[' -z 3509606 ']' 00:14:43.225 19:30:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # kill -0 3509606 00:14:43.225 19:30:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # uname 00:14:43.225 19:30:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:43.225 19:30:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3509606 00:14:43.225 19:30:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:43.225 19:30:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:43.226 19:30:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3509606' 00:14:43.226 killing process with pid 3509606 00:14:43.226 19:30:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@965 -- # kill 3509606 00:14:43.226 19:30:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # wait 3509606 00:14:43.226 19:30:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 
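The fuzz pass above reuses the same vfio-user target layout (VFIOUSER transport, malloc0 namespace under nqn.2021-09.io.spdk:cnode0) and then points the nvme_fuzz example app at it for 30 seconds. A sketch of the invocation as traced, assuming an SPDK build tree in $SPDK_DIR:
# -m 0x2: pin the fuzzer to core 1, away from the fuzz target started with -m 0x1 (core 0)
# -t 30:  fuzz for 30 seconds; -S 123456: fixed seed so a failing run can be replayed
# -F:     transport ID of the controller to fuzz; -N and -a as passed by vfio_user_fuzz.sh
trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
"$SPDK_DIR"/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a
The run summary above reports roughly 973k I/O and 240k admin commands completed in that window against the malloc-backed namespace.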
00:14:43.226 19:30:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:43.226 00:14:43.226 real 0m32.790s 00:14:43.226 user 0m37.560s 00:14:43.226 sys 0m24.290s 00:14:43.226 19:30:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:43.226 19:30:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:43.226 ************************************ 00:14:43.226 END TEST nvmf_vfio_user_fuzz 00:14:43.226 ************************************ 00:14:43.226 19:30:07 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:43.226 19:30:07 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:43.226 19:30:07 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:43.226 19:30:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:43.226 ************************************ 00:14:43.226 START TEST nvmf_host_management 00:14:43.226 ************************************ 00:14:43.226 19:30:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:43.226 * Looking for test storage... 00:14:43.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
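For the host_management test, nvmf/common.sh assembles the target command line incrementally as a bash array (the NVMF_APP+=(...) steps traced here), and nvmftestinit later prefixes it with the network-namespace wrapper before nvmfappstart launches it. A consolidated sketch of that pattern, with names taken from the trace; the exact composition inside common.sh is more involved:
# Simplified illustration of the array-building pattern seen in the trace.
NVMF_APP=("$SPDK_DIR/build/bin/nvmf_tgt")
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)                  # shm id plus full tracepoint mask
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")  # namespace created by nvmftestinit
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")       # final form: ip netns exec <ns> nvmf_tgt ...
"${NVMF_APP[@]}" -m 0x1E &                                   # nvmfappstart adds the core mask (cores 1-4)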
00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:14:43.226 19:30:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:51.366 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:51.366 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:14:51.366 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:51.366 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:51.366 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:51.367 19:30:16 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:51.367 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:51.367 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
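The block above walks the detected PCI devices and keeps the two Intel E810 ports (device ID 0x159b, ice driver) as the test NICs; the lines that follow resolve each function to its kernel net device via sysfs. A standalone equivalent of that lookup, assuming lspci and sysfs are available:
# List the net devices backing every Intel E810 (8086:159b) PCI function.
for pci in $(lspci -Dnd 8086:159b | awk '{print $1}'); do
    echo "net devices under $pci:"
    ls "/sys/bus/pci/devices/$pci/net/"
done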
00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:51.367 Found net devices under 0000:31:00.0: cvl_0_0 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:51.367 Found net devices under 0000:31:00.1: cvl_0_1 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:51.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:51.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:14:51.367 00:14:51.367 --- 10.0.0.2 ping statistics --- 00:14:51.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.367 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:51.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:51.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.483 ms 00:14:51.367 00:14:51.367 --- 10.0.0.1 ping statistics --- 00:14:51.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.367 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:14:51.367 19:30:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:14:51.368 19:30:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:51.368 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:51.368 19:30:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:51.368 19:30:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:51.368 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=3520836 00:14:51.368 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 3520836 00:14:51.368 19:30:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:51.368 19:30:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 3520836 ']' 00:14:51.368 19:30:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.368 19:30:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:51.368 19:30:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.368 19:30:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:51.368 19:30:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:51.368 [2024-05-15 19:30:16.683931] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
00:14:51.368 [2024-05-15 19:30:16.683997] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:51.368 EAL: No free 2048 kB hugepages reported on node 1 00:14:51.368 [2024-05-15 19:30:16.761805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:51.368 [2024-05-15 19:30:16.836059] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:51.368 [2024-05-15 19:30:16.836099] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:51.368 [2024-05-15 19:30:16.836106] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:51.368 [2024-05-15 19:30:16.836113] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:51.368 [2024-05-15 19:30:16.836118] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:51.368 [2024-05-15 19:30:16.836229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:51.368 [2024-05-15 19:30:16.836379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:51.368 [2024-05-15 19:30:16.836639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:51.368 [2024-05-15 19:30:16.836641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:51.629 19:30:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:51.629 19:30:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:14:51.629 19:30:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:51.629 19:30:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:51.629 19:30:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:51.629 19:30:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:51.629 19:30:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:51.629 19:30:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.629 19:30:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:51.629 [2024-05-15 19:30:17.612245] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:51.629 19:30:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.629 19:30:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:51.629 19:30:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:51.629 19:30:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:51.629 19:30:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:51.629 19:30:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:14:51.629 19:30:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:14:51.629 19:30:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.629 19:30:17 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:51.629 Malloc0 00:14:51.629 [2024-05-15 19:30:17.675334] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:51.629 [2024-05-15 19:30:17.675554] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:51.629 19:30:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.629 19:30:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:51.629 19:30:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:51.629 19:30:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:51.629 19:30:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3521009 00:14:51.629 19:30:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3521009 /var/tmp/bdevperf.sock 00:14:51.629 19:30:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 3521009 ']' 00:14:51.629 19:30:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:51.629 19:30:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:51.629 19:30:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:51.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:51.629 19:30:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:51.629 19:30:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:51.629 19:30:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:51.629 19:30:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:51.629 19:30:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:51.629 19:30:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:51.629 19:30:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:51.629 19:30:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:51.629 { 00:14:51.629 "params": { 00:14:51.629 "name": "Nvme$subsystem", 00:14:51.629 "trtype": "$TEST_TRANSPORT", 00:14:51.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:51.629 "adrfam": "ipv4", 00:14:51.629 "trsvcid": "$NVMF_PORT", 00:14:51.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:51.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:51.629 "hdgst": ${hdgst:-false}, 00:14:51.629 "ddgst": ${ddgst:-false} 00:14:51.629 }, 00:14:51.629 "method": "bdev_nvme_attach_controller" 00:14:51.629 } 00:14:51.629 EOF 00:14:51.629 )") 00:14:51.629 19:30:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:51.629 19:30:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
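gen_nvmf_target_json emits the bdev_nvme_attach_controller parameters that bdevperf consumes on /dev/fd/63; the rendered values appear in the trace just below. A sketch of the same run with the configuration written to a file instead; the surrounding "subsystems"/"bdev" wrapper is the standard SPDK JSON-config layout added by the helper, not shown verbatim in this trace:
cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# -q 64: queue depth, -o 65536: 64 KiB I/Os, -w verify: write/read-back verification, -t 10: run time in seconds
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 10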
00:14:51.629 19:30:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:51.629 19:30:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:51.629 "params": { 00:14:51.629 "name": "Nvme0", 00:14:51.629 "trtype": "tcp", 00:14:51.629 "traddr": "10.0.0.2", 00:14:51.629 "adrfam": "ipv4", 00:14:51.629 "trsvcid": "4420", 00:14:51.629 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:51.629 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:51.629 "hdgst": false, 00:14:51.629 "ddgst": false 00:14:51.629 }, 00:14:51.629 "method": "bdev_nvme_attach_controller" 00:14:51.629 }' 00:14:51.629 [2024-05-15 19:30:17.777014] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:14:51.629 [2024-05-15 19:30:17.777064] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3521009 ] 00:14:51.629 EAL: No free 2048 kB hugepages reported on node 1 00:14:51.890 [2024-05-15 19:30:17.858621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.890 [2024-05-15 19:30:17.923259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.150 Running I/O for 10 seconds... 00:14:52.724 19:30:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:52.724 19:30:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:14:52.724 19:30:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:52.724 19:30:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.724 19:30:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:52.724 19:30:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.724 19:30:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:52.724 19:30:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:52.724 19:30:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:52.724 19:30:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:52.724 19:30:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:14:52.724 19:30:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:14:52.724 19:30:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:52.724 19:30:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:52.724 19:30:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:52.725 19:30:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:52.725 19:30:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.725 19:30:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:52.725 19:30:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.725 19:30:18 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=583 00:14:52.725 19:30:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 583 -ge 100 ']' 00:14:52.725 19:30:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:14:52.725 19:30:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:14:52.725 19:30:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:14:52.725 19:30:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:52.725 19:30:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.725 19:30:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:52.725 [2024-05-15 19:30:18.719543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.725 [2024-05-15 19:30:18.719584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.725 [2024-05-15 19:30:18.719601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.725 [2024-05-15 19:30:18.719609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.725 [2024-05-15 19:30:18.719619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.725 [2024-05-15 19:30:18.719626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.725 [2024-05-15 19:30:18.719636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.725 [2024-05-15 19:30:18.719643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.725 [2024-05-15 19:30:18.719652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.725 [2024-05-15 19:30:18.719659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.725 [2024-05-15 19:30:18.719668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.725 [2024-05-15 19:30:18.719680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.725 [2024-05-15 19:30:18.719690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.725 [2024-05-15 19:30:18.719698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.725 [2024-05-15 19:30:18.719707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:52.725 [2024-05-15 
19:30:18.719714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.725 [... identical command/completion pairs repeat for the remainder of the 64 outstanding commands: WRITE lba 87296-90496 (cid 42-63, then 0-3; lba step 128) and READ lba 82432-86144 (cid 4-33; lba step 128), each completing with ABORTED - SQ DELETION (00/08) ...] 00:14:52.726 [2024-05-15 19:30:18.720695] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa9e080 was disconnected and freed. reset controller. 
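The aborted-I/O burst above is the intended effect of the host_management step being traced: while bdevperf runs verify I/O against Nvme0n1, the test removes the host from the subsystem, the submission queue is deleted, every queued command completes with ABORTED - SQ DELETION, and bdev_nvme resets the controller. A minimal sketch of that step, assuming a target and bdevperf are already running with the sockets and NQNs shown in this trace, and writing rpc.py for the scripts/rpc.py invoked throughout:

  # Wait for bdevperf to finish init, then confirm I/O is flowing on Nvme0n1.
  rpc.py -s /var/tmp/bdevperf.sock framework_wait_init
  rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops'

  # Remove the host while I/O is in flight: queued commands are aborted
  # (SQ DELETION) and the controller is reset, as logged above.
  rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

  # The trace then re-adds the host before tearing bdevperf down with kill -9.
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0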
00:14:52.726 [2024-05-15 19:30:18.720734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:52.726 [2024-05-15 19:30:18.720744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.726 [2024-05-15 19:30:18.720752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:52.726 [2024-05-15 19:30:18.720760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.726 [2024-05-15 19:30:18.720768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:52.726 [2024-05-15 19:30:18.720774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.726 [2024-05-15 19:30:18.720783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:52.726 [2024-05-15 19:30:18.720790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.726 [2024-05-15 19:30:18.720801] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68d4c0 is same with the state(5) to be set 00:14:52.726 [2024-05-15 19:30:18.721979] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:14:52.726 task offset: 86272 on job bdev=Nvme0n1 fails 00:14:52.726 00:14:52.726 Latency(us) 00:14:52.726 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:52.726 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:52.726 Job: Nvme0n1 ended in about 0.53 seconds with error 00:14:52.726 Verification LBA range: start 0x0 length 0x400 00:14:52.726 Nvme0n1 : 0.53 1219.24 76.20 121.17 0.00 46607.76 1993.39 39103.15 00:14:52.726 =================================================================================================================== 00:14:52.726 Total : 1219.24 76.20 121.17 0.00 46607.76 1993.39 39103.15 00:14:52.726 [2024-05-15 19:30:18.723947] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:52.726 [2024-05-15 19:30:18.723968] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68d4c0 (9): Bad file descriptor 00:14:52.726 19:30:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.726 19:30:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:52.726 19:30:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.726 19:30:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:52.726 [2024-05-15 19:30:18.728666] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:14:52.726 [2024-05-15 19:30:18.728768] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:52.726 [2024-05-15 19:30:18.728798] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.726 [2024-05-15 19:30:18.728815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:14:52.727 [2024-05-15 19:30:18.728823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:14:52.727 [2024-05-15 19:30:18.728830] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:14:52.727 [2024-05-15 19:30:18.728838] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x68d4c0 00:14:52.727 [2024-05-15 19:30:18.728858] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68d4c0 (9): Bad file descriptor 00:14:52.727 [2024-05-15 19:30:18.728871] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:14:52.727 [2024-05-15 19:30:18.728878] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:14:52.727 [2024-05-15 19:30:18.728887] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:14:52.727 [2024-05-15 19:30:18.728900] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:14:52.727 19:30:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.727 19:30:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:14:53.670 19:30:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3521009 00:14:53.670 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3521009) - No such process 00:14:53.670 19:30:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:14:53.670 19:30:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:53.670 19:30:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:53.670 19:30:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:53.670 19:30:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:53.670 19:30:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:53.670 19:30:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:53.670 19:30:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:53.670 { 00:14:53.670 "params": { 00:14:53.670 "name": "Nvme$subsystem", 00:14:53.670 "trtype": "$TEST_TRANSPORT", 00:14:53.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:53.670 "adrfam": "ipv4", 00:14:53.670 "trsvcid": "$NVMF_PORT", 00:14:53.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:53.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:53.670 "hdgst": ${hdgst:-false}, 00:14:53.670 "ddgst": ${ddgst:-false} 00:14:53.670 }, 00:14:53.670 "method": "bdev_nvme_attach_controller" 00:14:53.670 } 00:14:53.670 EOF 00:14:53.670 )") 00:14:53.670 19:30:19 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:53.670 19:30:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:14:53.670 19:30:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:53.670 19:30:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:53.670 "params": { 00:14:53.670 "name": "Nvme0", 00:14:53.670 "trtype": "tcp", 00:14:53.670 "traddr": "10.0.0.2", 00:14:53.670 "adrfam": "ipv4", 00:14:53.670 "trsvcid": "4420", 00:14:53.670 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:53.670 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:53.670 "hdgst": false, 00:14:53.670 "ddgst": false 00:14:53.670 }, 00:14:53.670 "method": "bdev_nvme_attach_controller" 00:14:53.670 }' 00:14:53.670 [2024-05-15 19:30:19.801101] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:14:53.670 [2024-05-15 19:30:19.801160] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3521503 ] 00:14:53.670 EAL: No free 2048 kB hugepages reported on node 1 00:14:53.931 [2024-05-15 19:30:19.883718] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.931 [2024-05-15 19:30:19.948125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.191 Running I/O for 1 seconds... 00:14:55.132 00:14:55.132 Latency(us) 00:14:55.132 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.132 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:55.132 Verification LBA range: start 0x0 length 0x400 00:14:55.132 Nvme0n1 : 1.01 1272.34 79.52 0.00 0.00 49478.38 11851.09 39976.96 00:14:55.132 =================================================================================================================== 00:14:55.132 Total : 1272.34 79.52 0.00 0.00 49478.38 11851.09 39976.96 00:14:55.132 19:30:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:14:55.132 19:30:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:55.132 19:30:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:14:55.132 19:30:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:55.132 19:30:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:14:55.132 19:30:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:55.132 19:30:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:14:55.132 19:30:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:55.132 19:30:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:14:55.132 19:30:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:55.132 19:30:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:55.132 rmmod nvme_tcp 00:14:55.392 rmmod nvme_fabrics 00:14:55.392 rmmod nvme_keyring 00:14:55.392 19:30:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:55.392 19:30:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:14:55.392 19:30:21 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:14:55.392 19:30:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 3520836 ']' 00:14:55.392 19:30:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 3520836 00:14:55.392 19:30:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 3520836 ']' 00:14:55.392 19:30:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 3520836 00:14:55.392 19:30:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:14:55.392 19:30:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:55.392 19:30:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3520836 00:14:55.392 19:30:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:55.392 19:30:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:55.392 19:30:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3520836' 00:14:55.392 killing process with pid 3520836 00:14:55.392 19:30:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 3520836 00:14:55.392 [2024-05-15 19:30:21.427870] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:55.392 19:30:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 3520836 00:14:55.392 [2024-05-15 19:30:21.546467] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:55.392 19:30:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:55.392 19:30:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:55.392 19:30:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:55.392 19:30:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:55.392 19:30:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:55.392 19:30:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.392 19:30:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:55.392 19:30:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:57.947 19:30:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:57.947 19:30:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:14:57.947 00:14:57.947 real 0m15.730s 00:14:57.947 user 0m23.952s 00:14:57.947 sys 0m7.393s 00:14:57.947 19:30:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:57.947 19:30:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:57.947 ************************************ 00:14:57.947 END TEST nvmf_host_management 00:14:57.947 ************************************ 00:14:57.947 19:30:23 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:57.947 19:30:23 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 
00:14:57.947 19:30:23 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:57.947 19:30:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:57.947 ************************************ 00:14:57.947 START TEST nvmf_lvol 00:14:57.947 ************************************ 00:14:57.947 19:30:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:57.947 * Looking for test storage... 00:14:57.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:57.947 19:30:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:57.947 19:30:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:14:57.947 19:30:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:57.947 19:30:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:57.947 19:30:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:57.947 19:30:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:57.947 19:30:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:57.947 19:30:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:57.947 19:30:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:57.947 19:30:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:57.947 19:30:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:57.947 19:30:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:57.947 19:30:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:57.947 19:30:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:57.947 19:30:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:57.947 19:30:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:57.947 19:30:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:57.948 19:30:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:57.948 19:30:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:57.948 19:30:23 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:57.948 19:30:23 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:57.948 19:30:23 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:57.948 19:30:23 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.948 19:30:23 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.948 19:30:23 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.948 19:30:23 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:14:57.948 19:30:23 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.948 19:30:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:14:57.948 19:30:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:57.948 19:30:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:57.948 19:30:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:57.948 19:30:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:57.948 19:30:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:57.948 19:30:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:57.948 19:30:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:57.948 19:30:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:57.948 19:30:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:57.948 19:30:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:57.948 19:30:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:57.948 19:30:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:57.948 19:30:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:57.948 19:30:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:57.948 19:30:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:57.948 19:30:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:57.948 19:30:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:57.948 19:30:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # 
local -g is_hw=no 00:14:57.948 19:30:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:57.948 19:30:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:57.948 19:30:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:57.948 19:30:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:57.948 19:30:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:57.948 19:30:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:57.948 19:30:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:14:57.948 19:30:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:06.089 19:30:31 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:06.089 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:06.089 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:06.089 Found net devices under 0000:31:00.0: cvl_0_0 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:06.089 Found net devices under 0000:31:00.1: cvl_0_1 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:06.089 19:30:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:06.089 19:30:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:06.089 19:30:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:06.089 19:30:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:06.089 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:06.089 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.564 ms 00:15:06.089 00:15:06.089 --- 10.0.0.2 ping statistics --- 00:15:06.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.089 rtt min/avg/max/mdev = 0.564/0.564/0.564/0.000 ms 00:15:06.089 19:30:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:06.089 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:06.089 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms 00:15:06.089 00:15:06.089 --- 10.0.0.1 ping statistics --- 00:15:06.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.089 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:15:06.089 19:30:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:06.089 19:30:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:15:06.089 19:30:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:06.089 19:30:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:06.089 19:30:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:06.090 19:30:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:06.090 19:30:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:06.090 19:30:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:06.090 19:30:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:06.090 19:30:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:15:06.090 19:30:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:06.090 19:30:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:06.090 19:30:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:06.090 19:30:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=3526577 00:15:06.090 19:30:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 3526577 00:15:06.090 19:30:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:06.090 19:30:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 3526577 ']' 00:15:06.090 19:30:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.090 19:30:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:06.090 19:30:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.090 19:30:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:06.090 19:30:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:06.090 [2024-05-15 19:30:32.226563] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:15:06.090 [2024-05-15 19:30:32.226630] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:06.090 EAL: No free 2048 kB hugepages reported on node 1 00:15:06.350 [2024-05-15 19:30:32.319952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:06.351 [2024-05-15 19:30:32.414533] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:06.351 [2024-05-15 19:30:32.414589] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:06.351 [2024-05-15 19:30:32.414597] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:06.351 [2024-05-15 19:30:32.414604] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:06.351 [2024-05-15 19:30:32.414610] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:06.351 [2024-05-15 19:30:32.414750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:06.351 [2024-05-15 19:30:32.414881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:06.351 [2024-05-15 19:30:32.414884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.930 19:30:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:06.930 19:30:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:15:06.930 19:30:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:06.930 19:30:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:06.930 19:30:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:07.190 19:30:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:07.190 19:30:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:07.190 [2024-05-15 19:30:33.341745] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:07.190 19:30:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:07.450 19:30:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:15:07.450 19:30:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:07.711 19:30:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:15:07.711 19:30:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:15:07.970 19:30:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:15:08.229 19:30:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=f770f163-cb04-4985-970d-e3202396a55c 00:15:08.229 19:30:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f770f163-cb04-4985-970d-e3202396a55c lvol 20 00:15:08.488 19:30:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=e4036f82-756c-476d-a22f-1e4180616f28 00:15:08.488 19:30:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:08.747 19:30:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e4036f82-756c-476d-a22f-1e4180616f28 00:15:08.747 19:30:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
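Condensed, the nvmf_lvol setup traced above layers an lvol store on a RAID-0 of two malloc bdevs (64 MB, 512-byte blocks) and exports a 20 MB logical volume over NVMe/TCP. A rough sketch of the same RPC sequence, assuming a freshly started nvmf_tgt and again writing rpc.py for scripts/rpc.py; the lvstore and lvol identifiers are simply whatever the create calls return:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512                       # Malloc0
  rpc.py bdev_malloc_create 64 512                       # Malloc1
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)       # lvstore UUID
  lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)      # lvol UUID
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The bdev_lvol_snapshot, bdev_lvol_resize ... 30, bdev_lvol_clone and bdev_lvol_inflate calls that follow in the trace then operate on that lvol while spdk_nvme_perf drives random writes against the subsystem.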
00:15:09.008 [2024-05-15 19:30:35.094633] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:09.008 [2024-05-15 19:30:35.094885] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:09.008 19:30:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:09.268 19:30:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3527152 00:15:09.268 19:30:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:15:09.268 19:30:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:15:09.268 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.208 19:30:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot e4036f82-756c-476d-a22f-1e4180616f28 MY_SNAPSHOT 00:15:10.468 19:30:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=2f178fbb-cdfc-483d-acdd-63c0f08330cb 00:15:10.468 19:30:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize e4036f82-756c-476d-a22f-1e4180616f28 30 00:15:10.729 19:30:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 2f178fbb-cdfc-483d-acdd-63c0f08330cb MY_CLONE 00:15:10.989 19:30:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=c12bd180-faf4-4a3f-b2c1-c1f6ce5317d2 00:15:10.989 19:30:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate c12bd180-faf4-4a3f-b2c1-c1f6ce5317d2 00:15:11.558 19:30:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3527152 00:15:19.690 Initializing NVMe Controllers 00:15:19.690 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:15:19.690 Controller IO queue size 128, less than required. 00:15:19.690 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:19.690 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:15:19.690 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:15:19.690 Initialization complete. Launching workers. 
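While spdk_nvme_perf drives random writes against the exported namespace, the script mutates the volume underneath it: take a snapshot, grow the original lvol, clone the snapshot, then inflate the clone into a full copy, and finally wait for the 10-second perf run whose results follow. Reduced to a minimal sketch (same rpc.py shorthand; $lvol and $perf_pid are the values captured earlier in the trace):

  snap=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  rpc.py bdev_lvol_resize "$lvol" 30
  clone=$(rpc.py bdev_lvol_clone "$snap" MY_CLONE)
  rpc.py bdev_lvol_inflate "$clone"    # decouple the clone from its snapshot
  wait "$perf_pid"                     # let the perf run complete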
00:15:19.690 ======================================================== 00:15:19.690 Latency(us) 00:15:19.690 Device Information : IOPS MiB/s Average min max 00:15:19.690 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12433.60 48.57 10298.77 1418.65 51164.96 00:15:19.690 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12529.00 48.94 10217.43 3709.27 67518.37 00:15:19.690 ======================================================== 00:15:19.690 Total : 24962.60 97.51 10257.95 1418.65 67518.37 00:15:19.690 00:15:19.690 19:30:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:19.690 19:30:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e4036f82-756c-476d-a22f-1e4180616f28 00:15:19.951 19:30:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f770f163-cb04-4985-970d-e3202396a55c 00:15:20.211 19:30:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:15:20.211 19:30:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:15:20.211 19:30:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:15:20.211 19:30:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:20.211 19:30:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:15:20.211 19:30:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:20.211 19:30:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:15:20.211 19:30:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:20.211 19:30:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:20.211 rmmod nvme_tcp 00:15:20.211 rmmod nvme_fabrics 00:15:20.211 rmmod nvme_keyring 00:15:20.211 19:30:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:20.211 19:30:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:15:20.211 19:30:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:15:20.211 19:30:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 3526577 ']' 00:15:20.211 19:30:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 3526577 00:15:20.211 19:30:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 3526577 ']' 00:15:20.211 19:30:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 3526577 00:15:20.211 19:30:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:15:20.211 19:30:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:20.211 19:30:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3526577 00:15:20.211 19:30:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:20.211 19:30:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:20.211 19:30:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3526577' 00:15:20.211 killing process with pid 3526577 00:15:20.211 19:30:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 3526577 00:15:20.211 [2024-05-15 19:30:46.352908] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' 
scheduled for removal in v24.09 hit 1 times 00:15:20.211 19:30:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 3526577 00:15:20.479 19:30:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:20.480 19:30:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:20.480 19:30:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:20.480 19:30:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:20.480 19:30:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:20.480 19:30:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:20.480 19:30:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:20.480 19:30:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:22.495 19:30:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:22.495 00:15:22.495 real 0m24.849s 00:15:22.495 user 1m6.355s 00:15:22.495 sys 0m8.613s 00:15:22.495 19:30:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:22.495 19:30:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:22.495 ************************************ 00:15:22.495 END TEST nvmf_lvol 00:15:22.495 ************************************ 00:15:22.495 19:30:48 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:15:22.495 19:30:48 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:22.495 19:30:48 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:22.495 19:30:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:22.495 ************************************ 00:15:22.495 START TEST nvmf_lvs_grow 00:15:22.495 ************************************ 00:15:22.495 19:30:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:15:22.791 * Looking for test storage... 
00:15:22.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:22.791 19:30:48 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:22.791 19:30:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:15:22.791 19:30:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:22.791 19:30:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:22.791 19:30:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:22.791 19:30:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:22.791 19:30:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:22.791 19:30:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:22.791 19:30:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:22.792 19:30:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:22.792 19:30:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:22.792 19:30:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:22.792 19:30:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:22.792 19:30:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:22.792 19:30:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:22.792 19:30:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:22.792 19:30:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:22.792 19:30:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:22.792 19:30:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:22.792 19:30:48 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:22.792 19:30:48 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:22.792 19:30:48 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:22.792 19:30:48 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.792 19:30:48 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.792 19:30:48 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.792 19:30:48 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:15:22.792 19:30:48 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.792 19:30:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:15:22.792 19:30:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:22.792 19:30:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:22.792 19:30:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:22.792 19:30:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:22.792 19:30:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:22.792 19:30:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:22.792 19:30:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:22.792 19:30:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:22.792 19:30:48 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:22.792 19:30:48 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:22.792 19:30:48 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:15:22.792 19:30:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:22.792 19:30:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:22.792 19:30:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:22.792 19:30:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:22.792 19:30:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:22.792 19:30:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:15:22.792 19:30:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:22.792 19:30:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:22.792 19:30:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:22.792 19:30:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:22.792 19:30:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:15:22.792 19:30:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:30.936 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:30.936 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:15:30.936 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:30.936 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:30.936 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:30.936 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:30.936 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:30.936 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:15:30.936 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:30.936 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:15:30.936 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:15:30.936 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:15:30.936 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:15:30.936 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:15:30.936 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:15:30.936 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:30.936 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:30.936 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:30.936 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:30.936 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:30.936 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:30.936 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:30.936 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:30.936 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:30.936 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:30.936 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:30.937 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:30.937 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:30.937 Found net devices under 0000:31:00.0: cvl_0_0 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:30.937 Found net devices under 0000:31:00.1: cvl_0_1 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:30.937 19:30:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:30.937 19:30:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:30.937 19:30:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:30.937 19:30:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:30.937 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:30.937 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:15:30.937 00:15:30.937 --- 10.0.0.2 ping statistics --- 00:15:30.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.937 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:15:30.937 19:30:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:30.937 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:30.937 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:15:30.937 00:15:30.937 --- 10.0.0.1 ping statistics --- 00:15:30.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.937 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:15:30.937 19:30:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:30.937 19:30:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:15:30.937 19:30:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:30.937 19:30:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:30.937 19:30:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:30.937 19:30:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:30.937 19:30:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:30.937 19:30:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:30.937 19:30:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:31.198 19:30:57 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:15:31.198 19:30:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:31.198 19:30:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:31.198 19:30:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:31.198 19:30:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=3533985 00:15:31.198 19:30:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 3533985 00:15:31.198 19:30:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:31.198 19:30:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 3533985 ']' 00:15:31.198 19:30:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.198 19:30:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:31.198 19:30:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:31.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:31.198 19:30:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:31.198 19:30:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:31.198 [2024-05-15 19:30:57.203917] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:15:31.198 [2024-05-15 19:30:57.203983] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:31.198 EAL: No free 2048 kB hugepages reported on node 1 00:15:31.198 [2024-05-15 19:30:57.299718] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.459 [2024-05-15 19:30:57.393096] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:31.459 [2024-05-15 19:30:57.393156] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
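nvmf_tcp_init, traced just above, wires the two e810 ports into a point-to-point test topology: the target-side interface is moved into its own network namespace, both ends get 10.0.0.x addresses, port 4420 is opened, and a one-packet ping in each direction proves connectivity before the target starts. A minimal sketch of that setup, assuming the cvl_0_0/cvl_0_1 names this machine reports:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target side lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator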
00:15:31.459 [2024-05-15 19:30:57.393164] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:31.459 [2024-05-15 19:30:57.393171] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:31.459 [2024-05-15 19:30:57.393178] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:31.459 [2024-05-15 19:30:57.393203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.030 19:30:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:32.030 19:30:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:15:32.030 19:30:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:32.030 19:30:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:32.030 19:30:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:32.030 19:30:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:32.030 19:30:58 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:32.290 [2024-05-15 19:30:58.325996] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:32.290 19:30:58 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:15:32.290 19:30:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:32.290 19:30:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:32.290 19:30:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:32.291 ************************************ 00:15:32.291 START TEST lvs_grow_clean 00:15:32.291 ************************************ 00:15:32.291 19:30:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:15:32.291 19:30:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:32.291 19:30:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:32.291 19:30:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:32.291 19:30:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:32.291 19:30:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:32.291 19:30:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:32.291 19:30:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:32.291 19:30:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:32.291 19:30:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:32.552 19:30:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:15:32.552 19:30:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:32.812 19:30:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=3feb8ac2-5525-4764-a1f1-b2f2b45147fa 00:15:32.812 19:30:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3feb8ac2-5525-4764-a1f1-b2f2b45147fa 00:15:32.812 19:30:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:33.073 19:30:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:33.073 19:30:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:33.073 19:30:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3feb8ac2-5525-4764-a1f1-b2f2b45147fa lvol 150 00:15:33.332 19:30:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=29e45ca9-d654-4c09-828d-4af98f5a1e53 00:15:33.332 19:30:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:33.332 19:30:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:33.332 [2024-05-15 19:30:59.487446] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:33.332 [2024-05-15 19:30:59.487515] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:33.332 true 00:15:33.332 19:30:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:33.332 19:30:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3feb8ac2-5525-4764-a1f1-b2f2b45147fa 00:15:33.593 19:30:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:33.593 19:30:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:33.854 19:30:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 29e45ca9-d654-4c09-828d-4af98f5a1e53 00:15:34.115 19:31:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:34.376 [2024-05-15 19:31:00.361817] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:34.376 [2024-05-15 
19:31:00.362158] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:34.376 19:31:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:34.637 19:31:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:34.637 19:31:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3534695 00:15:34.637 19:31:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:34.637 19:31:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3534695 /var/tmp/bdevperf.sock 00:15:34.637 19:31:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 3534695 ']' 00:15:34.637 19:31:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:34.637 19:31:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:34.637 19:31:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:34.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:34.637 19:31:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:34.637 19:31:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:15:34.637 [2024-05-15 19:31:00.638423] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
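The lvs_grow tests do their I/O through bdevperf rather than spdk_nvme_perf: bdevperf is started with -z so it idles until told what to do, a controller for the exported subsystem is attached over its private RPC socket, and perform_tests kicks off the 10-second random-write run reported second by second below. A minimal sketch, assuming the same socket path and subsystem as this run (paths abbreviated relative to the spdk checkout):

  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 \
      -w randwrite -t 10 -S 1 -z &
  bdevperf_pid=$!
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests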
00:15:34.637 [2024-05-15 19:31:00.638494] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3534695 ] 00:15:34.637 EAL: No free 2048 kB hugepages reported on node 1 00:15:34.637 [2024-05-15 19:31:00.709799] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.637 [2024-05-15 19:31:00.782764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.898 19:31:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:34.898 19:31:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:15:34.898 19:31:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:35.157 Nvme0n1 00:15:35.157 19:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:35.417 [ 00:15:35.417 { 00:15:35.417 "name": "Nvme0n1", 00:15:35.417 "aliases": [ 00:15:35.417 "29e45ca9-d654-4c09-828d-4af98f5a1e53" 00:15:35.417 ], 00:15:35.417 "product_name": "NVMe disk", 00:15:35.417 "block_size": 4096, 00:15:35.417 "num_blocks": 38912, 00:15:35.417 "uuid": "29e45ca9-d654-4c09-828d-4af98f5a1e53", 00:15:35.417 "assigned_rate_limits": { 00:15:35.417 "rw_ios_per_sec": 0, 00:15:35.417 "rw_mbytes_per_sec": 0, 00:15:35.417 "r_mbytes_per_sec": 0, 00:15:35.417 "w_mbytes_per_sec": 0 00:15:35.417 }, 00:15:35.417 "claimed": false, 00:15:35.417 "zoned": false, 00:15:35.417 "supported_io_types": { 00:15:35.417 "read": true, 00:15:35.417 "write": true, 00:15:35.417 "unmap": true, 00:15:35.417 "write_zeroes": true, 00:15:35.417 "flush": true, 00:15:35.417 "reset": true, 00:15:35.417 "compare": true, 00:15:35.417 "compare_and_write": true, 00:15:35.417 "abort": true, 00:15:35.417 "nvme_admin": true, 00:15:35.417 "nvme_io": true 00:15:35.417 }, 00:15:35.417 "memory_domains": [ 00:15:35.417 { 00:15:35.417 "dma_device_id": "system", 00:15:35.417 "dma_device_type": 1 00:15:35.417 } 00:15:35.417 ], 00:15:35.417 "driver_specific": { 00:15:35.417 "nvme": [ 00:15:35.417 { 00:15:35.417 "trid": { 00:15:35.417 "trtype": "TCP", 00:15:35.417 "adrfam": "IPv4", 00:15:35.417 "traddr": "10.0.0.2", 00:15:35.417 "trsvcid": "4420", 00:15:35.417 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:35.417 }, 00:15:35.417 "ctrlr_data": { 00:15:35.417 "cntlid": 1, 00:15:35.417 "vendor_id": "0x8086", 00:15:35.417 "model_number": "SPDK bdev Controller", 00:15:35.417 "serial_number": "SPDK0", 00:15:35.417 "firmware_revision": "24.05", 00:15:35.417 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:35.417 "oacs": { 00:15:35.417 "security": 0, 00:15:35.417 "format": 0, 00:15:35.417 "firmware": 0, 00:15:35.417 "ns_manage": 0 00:15:35.417 }, 00:15:35.417 "multi_ctrlr": true, 00:15:35.417 "ana_reporting": false 00:15:35.417 }, 00:15:35.417 "vs": { 00:15:35.417 "nvme_version": "1.3" 00:15:35.417 }, 00:15:35.417 "ns_data": { 00:15:35.417 "id": 1, 00:15:35.417 "can_share": true 00:15:35.417 } 00:15:35.417 } 00:15:35.417 ], 00:15:35.417 "mp_policy": "active_passive" 00:15:35.417 } 00:15:35.417 } 00:15:35.417 ] 00:15:35.417 19:31:01 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3534841 00:15:35.417 19:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:35.417 19:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:35.417 Running I/O for 10 seconds... 00:15:36.800 Latency(us) 00:15:36.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:36.800 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:36.800 Nvme0n1 : 1.00 18068.00 70.58 0.00 0.00 0.00 0.00 0.00 00:15:36.800 =================================================================================================================== 00:15:36.800 Total : 18068.00 70.58 0.00 0.00 0.00 0.00 0.00 00:15:36.800 00:15:37.370 19:31:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3feb8ac2-5525-4764-a1f1-b2f2b45147fa 00:15:37.630 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:37.630 Nvme0n1 : 2.00 18213.00 71.14 0.00 0.00 0.00 0.00 0.00 00:15:37.630 =================================================================================================================== 00:15:37.630 Total : 18213.00 71.14 0.00 0.00 0.00 0.00 0.00 00:15:37.630 00:15:37.630 true 00:15:37.630 19:31:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3feb8ac2-5525-4764-a1f1-b2f2b45147fa 00:15:37.630 19:31:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:37.890 19:31:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:37.890 19:31:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:37.890 19:31:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3534841 00:15:38.460 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:38.460 Nvme0n1 : 3.00 18264.33 71.35 0.00 0.00 0.00 0.00 0.00 00:15:38.460 =================================================================================================================== 00:15:38.460 Total : 18264.33 71.35 0.00 0.00 0.00 0.00 0.00 00:15:38.460 00:15:39.400 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:39.400 Nvme0n1 : 4.00 18290.50 71.45 0.00 0.00 0.00 0.00 0.00 00:15:39.400 =================================================================================================================== 00:15:39.400 Total : 18290.50 71.45 0.00 0.00 0.00 0.00 0.00 00:15:39.400 00:15:40.782 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:40.782 Nvme0n1 : 5.00 18331.40 71.61 0.00 0.00 0.00 0.00 0.00 00:15:40.782 =================================================================================================================== 00:15:40.782 Total : 18331.40 71.61 0.00 0.00 0.00 0.00 0.00 00:15:40.782 00:15:41.723 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:41.723 Nvme0n1 : 6.00 18358.83 71.71 0.00 0.00 0.00 0.00 0.00 00:15:41.723 
=================================================================================================================== 00:15:41.723 Total : 18358.83 71.71 0.00 0.00 0.00 0.00 0.00 00:15:41.723 00:15:42.662 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:42.662 Nvme0n1 : 7.00 18378.43 71.79 0.00 0.00 0.00 0.00 0.00 00:15:42.662 =================================================================================================================== 00:15:42.663 Total : 18378.43 71.79 0.00 0.00 0.00 0.00 0.00 00:15:42.663 00:15:43.604 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:43.604 Nvme0n1 : 8.00 18393.12 71.85 0.00 0.00 0.00 0.00 0.00 00:15:43.604 =================================================================================================================== 00:15:43.604 Total : 18393.12 71.85 0.00 0.00 0.00 0.00 0.00 00:15:43.604 00:15:44.546 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:44.546 Nvme0n1 : 9.00 18404.56 71.89 0.00 0.00 0.00 0.00 0.00 00:15:44.546 =================================================================================================================== 00:15:44.546 Total : 18404.56 71.89 0.00 0.00 0.00 0.00 0.00 00:15:44.546 00:15:45.487 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:45.487 Nvme0n1 : 10.00 18413.70 71.93 0.00 0.00 0.00 0.00 0.00 00:15:45.487 =================================================================================================================== 00:15:45.487 Total : 18413.70 71.93 0.00 0.00 0.00 0.00 0.00 00:15:45.487 00:15:45.487 00:15:45.487 Latency(us) 00:15:45.487 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:45.487 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:45.487 Nvme0n1 : 10.01 18412.68 71.92 0.00 0.00 6946.71 4423.68 16165.55 00:15:45.487 =================================================================================================================== 00:15:45.487 Total : 18412.68 71.92 0.00 0.00 6946.71 4423.68 16165.55 00:15:45.487 0 00:15:45.487 19:31:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3534695 00:15:45.487 19:31:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 3534695 ']' 00:15:45.487 19:31:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 3534695 00:15:45.487 19:31:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:15:45.487 19:31:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:45.487 19:31:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3534695 00:15:45.747 19:31:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:45.748 19:31:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:45.748 19:31:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3534695' 00:15:45.748 killing process with pid 3534695 00:15:45.748 19:31:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 3534695 00:15:45.748 Received shutdown signal, test time was about 10.000000 seconds 00:15:45.748 00:15:45.748 Latency(us) 00:15:45.748 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:15:45.748 =================================================================================================================== 00:15:45.748 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:45.748 19:31:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 3534695 00:15:45.748 19:31:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:46.009 19:31:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:46.270 19:31:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3feb8ac2-5525-4764-a1f1-b2f2b45147fa 00:15:46.270 19:31:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:46.530 19:31:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:46.530 19:31:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:15:46.530 19:31:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:46.530 [2024-05-15 19:31:12.649920] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:46.530 19:31:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3feb8ac2-5525-4764-a1f1-b2f2b45147fa 00:15:46.530 19:31:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:15:46.530 19:31:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3feb8ac2-5525-4764-a1f1-b2f2b45147fa 00:15:46.530 19:31:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:46.530 19:31:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:46.530 19:31:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:46.530 19:31:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:46.530 19:31:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:46.530 19:31:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:46.530 19:31:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:46.530 19:31:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:46.530 19:31:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3feb8ac2-5525-4764-a1f1-b2f2b45147fa 00:15:46.790 request: 00:15:46.790 { 00:15:46.790 "uuid": "3feb8ac2-5525-4764-a1f1-b2f2b45147fa", 00:15:46.790 "method": "bdev_lvol_get_lvstores", 00:15:46.790 "req_id": 1 00:15:46.790 } 00:15:46.790 Got JSON-RPC error response 00:15:46.790 response: 00:15:46.790 { 00:15:46.790 "code": -19, 00:15:46.790 "message": "No such device" 00:15:46.790 } 00:15:46.790 19:31:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:15:46.790 19:31:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:46.790 19:31:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:46.790 19:31:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:46.790 19:31:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:47.051 aio_bdev 00:15:47.051 19:31:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 29e45ca9-d654-4c09-828d-4af98f5a1e53 00:15:47.051 19:31:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=29e45ca9-d654-4c09-828d-4af98f5a1e53 00:15:47.051 19:31:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:47.051 19:31:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:15:47.051 19:31:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:47.051 19:31:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:47.051 19:31:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:47.311 19:31:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 29e45ca9-d654-4c09-828d-4af98f5a1e53 -t 2000 00:15:47.571 [ 00:15:47.571 { 00:15:47.571 "name": "29e45ca9-d654-4c09-828d-4af98f5a1e53", 00:15:47.571 "aliases": [ 00:15:47.571 "lvs/lvol" 00:15:47.571 ], 00:15:47.571 "product_name": "Logical Volume", 00:15:47.571 "block_size": 4096, 00:15:47.571 "num_blocks": 38912, 00:15:47.571 "uuid": "29e45ca9-d654-4c09-828d-4af98f5a1e53", 00:15:47.571 "assigned_rate_limits": { 00:15:47.571 "rw_ios_per_sec": 0, 00:15:47.571 "rw_mbytes_per_sec": 0, 00:15:47.571 "r_mbytes_per_sec": 0, 00:15:47.571 "w_mbytes_per_sec": 0 00:15:47.571 }, 00:15:47.571 "claimed": false, 00:15:47.571 "zoned": false, 00:15:47.571 "supported_io_types": { 00:15:47.571 "read": true, 00:15:47.571 "write": true, 00:15:47.571 "unmap": true, 00:15:47.571 "write_zeroes": true, 00:15:47.571 "flush": false, 00:15:47.571 "reset": true, 00:15:47.571 "compare": false, 00:15:47.571 "compare_and_write": false, 00:15:47.571 "abort": false, 00:15:47.571 "nvme_admin": false, 00:15:47.571 "nvme_io": false 00:15:47.571 }, 00:15:47.571 "driver_specific": { 00:15:47.571 "lvol": { 00:15:47.571 "lvol_store_uuid": "3feb8ac2-5525-4764-a1f1-b2f2b45147fa", 00:15:47.571 "base_bdev": "aio_bdev", 
00:15:47.571 "thin_provision": false, 00:15:47.571 "num_allocated_clusters": 38, 00:15:47.571 "snapshot": false, 00:15:47.571 "clone": false, 00:15:47.571 "esnap_clone": false 00:15:47.571 } 00:15:47.571 } 00:15:47.571 } 00:15:47.571 ] 00:15:47.572 19:31:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:15:47.572 19:31:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3feb8ac2-5525-4764-a1f1-b2f2b45147fa 00:15:47.572 19:31:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:47.572 19:31:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:47.572 19:31:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3feb8ac2-5525-4764-a1f1-b2f2b45147fa 00:15:47.572 19:31:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:47.832 19:31:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:47.832 19:31:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 29e45ca9-d654-4c09-828d-4af98f5a1e53 00:15:48.092 19:31:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3feb8ac2-5525-4764-a1f1-b2f2b45147fa 00:15:48.352 19:31:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:48.613 19:31:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:48.613 00:15:48.613 real 0m16.176s 00:15:48.613 user 0m15.838s 00:15:48.613 sys 0m1.424s 00:15:48.613 19:31:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:48.613 19:31:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:15:48.613 ************************************ 00:15:48.613 END TEST lvs_grow_clean 00:15:48.613 ************************************ 00:15:48.613 19:31:14 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:15:48.613 19:31:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:48.613 19:31:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:48.613 19:31:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:48.613 ************************************ 00:15:48.613 START TEST lvs_grow_dirty 00:15:48.613 ************************************ 00:15:48.613 19:31:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:15:48.613 19:31:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:48.613 19:31:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:48.613 19:31:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:15:48.613 19:31:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:48.613 19:31:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:48.613 19:31:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:48.613 19:31:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:48.613 19:31:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:48.613 19:31:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:48.873 19:31:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:48.873 19:31:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:49.133 19:31:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=57cb1f45-8990-4b60-9f65-d3fff4ec9fc0 00:15:49.133 19:31:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 57cb1f45-8990-4b60-9f65-d3fff4ec9fc0 00:15:49.133 19:31:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:49.133 19:31:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:49.133 19:31:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:49.133 19:31:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 57cb1f45-8990-4b60-9f65-d3fff4ec9fc0 lvol 150 00:15:49.393 19:31:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=c3e41fa2-81e0-4b45-ba15-ac3cbf3f0295 00:15:49.393 19:31:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:49.393 19:31:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:49.653 [2024-05-15 19:31:15.686343] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:49.653 [2024-05-15 19:31:15.686394] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:49.653 true 00:15:49.653 19:31:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 57cb1f45-8990-4b60-9f65-d3fff4ec9fc0 00:15:49.653 19:31:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:15:49.913 19:31:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:49.913 19:31:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:50.173 19:31:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c3e41fa2-81e0-4b45-ba15-ac3cbf3f0295 00:15:50.174 19:31:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:50.433 [2024-05-15 19:31:16.496729] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:50.433 19:31:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:50.694 19:31:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:50.694 19:31:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3537790 00:15:50.694 19:31:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:50.694 19:31:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3537790 /var/tmp/bdevperf.sock 00:15:50.694 19:31:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 3537790 ']' 00:15:50.694 19:31:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:50.694 19:31:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:50.694 19:31:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:50.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:50.694 19:31:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:50.694 19:31:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:50.694 [2024-05-15 19:31:16.739938] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
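For readability, the lvs_grow_dirty setup traced above condenses to the sketch below. It is a sketch, not captured output: $RPC and aio_bdev_file are shorthand for the full workspace paths to scripts/rpc.py and test/nvmf/target/aio_bdev printed in the trace, and the cluster counts are the values the trace asserts.

  truncate -s 200M aio_bdev_file
  $RPC bdev_aio_create aio_bdev_file aio_bdev 4096
  lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)         # 49 data clusters
  lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 150)
  truncate -s 400M aio_bdev_file                                 # grow only the backing file
  $RPC bdev_aio_rescan aio_bdev                                  # 51200 -> 102400 blocks
  $RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # still 49
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420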
00:15:50.694 [2024-05-15 19:31:16.739986] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3537790 ] 00:15:50.694 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.694 [2024-05-15 19:31:16.805059] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.694 [2024-05-15 19:31:16.869260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:50.955 19:31:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:50.955 19:31:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:15:50.955 19:31:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:51.215 Nvme0n1 00:15:51.215 19:31:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:51.474 [ 00:15:51.474 { 00:15:51.474 "name": "Nvme0n1", 00:15:51.474 "aliases": [ 00:15:51.474 "c3e41fa2-81e0-4b45-ba15-ac3cbf3f0295" 00:15:51.474 ], 00:15:51.474 "product_name": "NVMe disk", 00:15:51.474 "block_size": 4096, 00:15:51.474 "num_blocks": 38912, 00:15:51.474 "uuid": "c3e41fa2-81e0-4b45-ba15-ac3cbf3f0295", 00:15:51.474 "assigned_rate_limits": { 00:15:51.474 "rw_ios_per_sec": 0, 00:15:51.474 "rw_mbytes_per_sec": 0, 00:15:51.474 "r_mbytes_per_sec": 0, 00:15:51.474 "w_mbytes_per_sec": 0 00:15:51.474 }, 00:15:51.474 "claimed": false, 00:15:51.474 "zoned": false, 00:15:51.474 "supported_io_types": { 00:15:51.474 "read": true, 00:15:51.474 "write": true, 00:15:51.474 "unmap": true, 00:15:51.474 "write_zeroes": true, 00:15:51.474 "flush": true, 00:15:51.474 "reset": true, 00:15:51.474 "compare": true, 00:15:51.474 "compare_and_write": true, 00:15:51.474 "abort": true, 00:15:51.474 "nvme_admin": true, 00:15:51.474 "nvme_io": true 00:15:51.474 }, 00:15:51.474 "memory_domains": [ 00:15:51.474 { 00:15:51.474 "dma_device_id": "system", 00:15:51.474 "dma_device_type": 1 00:15:51.474 } 00:15:51.474 ], 00:15:51.474 "driver_specific": { 00:15:51.474 "nvme": [ 00:15:51.474 { 00:15:51.474 "trid": { 00:15:51.474 "trtype": "TCP", 00:15:51.474 "adrfam": "IPv4", 00:15:51.474 "traddr": "10.0.0.2", 00:15:51.474 "trsvcid": "4420", 00:15:51.474 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:51.474 }, 00:15:51.474 "ctrlr_data": { 00:15:51.474 "cntlid": 1, 00:15:51.474 "vendor_id": "0x8086", 00:15:51.474 "model_number": "SPDK bdev Controller", 00:15:51.474 "serial_number": "SPDK0", 00:15:51.474 "firmware_revision": "24.05", 00:15:51.474 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:51.474 "oacs": { 00:15:51.474 "security": 0, 00:15:51.474 "format": 0, 00:15:51.474 "firmware": 0, 00:15:51.474 "ns_manage": 0 00:15:51.474 }, 00:15:51.474 "multi_ctrlr": true, 00:15:51.474 "ana_reporting": false 00:15:51.474 }, 00:15:51.474 "vs": { 00:15:51.474 "nvme_version": "1.3" 00:15:51.474 }, 00:15:51.474 "ns_data": { 00:15:51.474 "id": 1, 00:15:51.474 "can_share": true 00:15:51.474 } 00:15:51.474 } 00:15:51.474 ], 00:15:51.474 "mp_policy": "active_passive" 00:15:51.474 } 00:15:51.474 } 00:15:51.474 ] 00:15:51.474 19:31:17 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3538115 00:15:51.474 19:31:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:51.474 19:31:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:51.474 Running I/O for 10 seconds... 00:15:52.414 Latency(us) 00:15:52.414 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:52.414 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:52.414 Nvme0n1 : 1.00 17994.00 70.29 0.00 0.00 0.00 0.00 0.00 00:15:52.414 =================================================================================================================== 00:15:52.414 Total : 17994.00 70.29 0.00 0.00 0.00 0.00 0.00 00:15:52.414 00:15:53.370 19:31:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 57cb1f45-8990-4b60-9f65-d3fff4ec9fc0 00:15:53.676 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:53.676 Nvme0n1 : 2.00 18116.50 70.77 0.00 0.00 0.00 0.00 0.00 00:15:53.676 =================================================================================================================== 00:15:53.676 Total : 18116.50 70.77 0.00 0.00 0.00 0.00 0.00 00:15:53.676 00:15:53.676 true 00:15:53.676 19:31:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 57cb1f45-8990-4b60-9f65-d3fff4ec9fc0 00:15:53.676 19:31:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:53.940 19:31:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:53.940 19:31:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:53.940 19:31:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3538115 00:15:54.510 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:54.510 Nvme0n1 : 3.00 18115.00 70.76 0.00 0.00 0.00 0.00 0.00 00:15:54.510 =================================================================================================================== 00:15:54.510 Total : 18115.00 70.76 0.00 0.00 0.00 0.00 0.00 00:15:54.510 00:15:55.450 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:55.450 Nvme0n1 : 4.00 18150.50 70.90 0.00 0.00 0.00 0.00 0.00 00:15:55.450 =================================================================================================================== 00:15:55.450 Total : 18150.50 70.90 0.00 0.00 0.00 0.00 0.00 00:15:55.450 00:15:56.833 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:56.833 Nvme0n1 : 5.00 18181.20 71.02 0.00 0.00 0.00 0.00 0.00 00:15:56.833 =================================================================================================================== 00:15:56.833 Total : 18181.20 71.02 0.00 0.00 0.00 0.00 0.00 00:15:56.833 00:15:57.403 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:57.403 Nvme0n1 : 6.00 18209.50 71.13 0.00 0.00 0.00 0.00 0.00 00:15:57.403 
=================================================================================================================== 00:15:57.403 Total : 18209.50 71.13 0.00 0.00 0.00 0.00 0.00 00:15:57.403 00:15:58.783 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:58.783 Nvme0n1 : 7.00 18223.00 71.18 0.00 0.00 0.00 0.00 0.00 00:15:58.783 =================================================================================================================== 00:15:58.783 Total : 18223.00 71.18 0.00 0.00 0.00 0.00 0.00 00:15:58.783 00:15:59.723 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:59.723 Nvme0n1 : 8.00 18249.00 71.29 0.00 0.00 0.00 0.00 0.00 00:15:59.724 =================================================================================================================== 00:15:59.724 Total : 18249.00 71.29 0.00 0.00 0.00 0.00 0.00 00:15:59.724 00:16:00.666 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:00.666 Nvme0n1 : 9.00 18262.22 71.34 0.00 0.00 0.00 0.00 0.00 00:16:00.666 =================================================================================================================== 00:16:00.666 Total : 18262.22 71.34 0.00 0.00 0.00 0.00 0.00 00:16:00.666 00:16:01.607 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:01.607 Nvme0n1 : 10.00 18272.70 71.38 0.00 0.00 0.00 0.00 0.00 00:16:01.607 =================================================================================================================== 00:16:01.607 Total : 18272.70 71.38 0.00 0.00 0.00 0.00 0.00 00:16:01.607 00:16:01.607 00:16:01.607 Latency(us) 00:16:01.607 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:01.607 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:01.607 Nvme0n1 : 10.01 18272.42 71.38 0.00 0.00 7001.23 4341.76 13926.40 00:16:01.607 =================================================================================================================== 00:16:01.607 Total : 18272.42 71.38 0.00 0.00 7001.23 4341.76 13926.40 00:16:01.607 0 00:16:01.607 19:31:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3537790 00:16:01.607 19:31:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 3537790 ']' 00:16:01.607 19:31:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 3537790 00:16:01.607 19:31:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:16:01.607 19:31:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:01.607 19:31:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3537790 00:16:01.607 19:31:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:01.607 19:31:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:01.607 19:31:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3537790' 00:16:01.607 killing process with pid 3537790 00:16:01.607 19:31:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 3537790 00:16:01.607 Received shutdown signal, test time was about 10.000000 seconds 00:16:01.607 00:16:01.607 Latency(us) 00:16:01.607 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:16:01.607 =================================================================================================================== 00:16:01.607 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:01.607 19:31:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 3537790 00:16:01.868 19:31:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:01.868 19:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:02.128 19:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 57cb1f45-8990-4b60-9f65-d3fff4ec9fc0 00:16:02.128 19:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:16:02.389 19:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:16:02.389 19:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:16:02.389 19:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3533985 00:16:02.389 19:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3533985 00:16:02.389 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3533985 Killed "${NVMF_APP[@]}" "$@" 00:16:02.389 19:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:16:02.389 19:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:16:02.389 19:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:02.389 19:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:02.389 19:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:02.389 19:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3540143 00:16:02.389 19:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 3540143 00:16:02.389 19:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:02.389 19:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 3540143 ']' 00:16:02.389 19:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.389 19:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:02.389 19:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
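The grow-under-I/O and dirty-shutdown phase traced above condenses to the sketch below (same shorthand as before; $lvs is the lvstore UUID 57cb1f45-... and $nvmfpid is the nvmf_tgt pid 3533985 from the log):

  bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
       -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &           # ~18k IOPS randwrite for 10 s
  $RPC bdev_lvol_grow_lvstore -u "$lvs"                           # 49 -> 99 data clusters while I/O runs
  $RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99
  $RPC nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  $RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'         # 61
  kill -9 "$nvmfpid"                                              # dirty shutdown: lvstore never unloaded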
00:16:02.389 19:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:02.389 19:31:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:02.389 [2024-05-15 19:31:28.516500] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:16:02.389 [2024-05-15 19:31:28.516554] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:02.389 EAL: No free 2048 kB hugepages reported on node 1 00:16:02.649 [2024-05-15 19:31:28.608109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.649 [2024-05-15 19:31:28.680643] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:02.649 [2024-05-15 19:31:28.680678] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:02.649 [2024-05-15 19:31:28.680686] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:02.649 [2024-05-15 19:31:28.680692] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:02.649 [2024-05-15 19:31:28.680698] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:02.649 [2024-05-15 19:31:28.680721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.221 19:31:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:03.221 19:31:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:16:03.221 19:31:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:03.221 19:31:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:03.221 19:31:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:03.481 19:31:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:03.481 19:31:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:03.481 [2024-05-15 19:31:29.598334] blobstore.c:4838:bs_recover: *NOTICE*: Performing recovery on blobstore 00:16:03.481 [2024-05-15 19:31:29.598421] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:16:03.481 [2024-05-15 19:31:29.598453] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:16:03.481 19:31:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:16:03.481 19:31:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev c3e41fa2-81e0-4b45-ba15-ac3cbf3f0295 00:16:03.481 19:31:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=c3e41fa2-81e0-4b45-ba15-ac3cbf3f0295 00:16:03.481 19:31:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:03.481 19:31:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:16:03.481 19:31:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:03.481 19:31:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:03.481 19:31:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:03.741 19:31:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c3e41fa2-81e0-4b45-ba15-ac3cbf3f0295 -t 2000 00:16:04.002 [ 00:16:04.002 { 00:16:04.002 "name": "c3e41fa2-81e0-4b45-ba15-ac3cbf3f0295", 00:16:04.002 "aliases": [ 00:16:04.002 "lvs/lvol" 00:16:04.002 ], 00:16:04.002 "product_name": "Logical Volume", 00:16:04.002 "block_size": 4096, 00:16:04.002 "num_blocks": 38912, 00:16:04.002 "uuid": "c3e41fa2-81e0-4b45-ba15-ac3cbf3f0295", 00:16:04.002 "assigned_rate_limits": { 00:16:04.002 "rw_ios_per_sec": 0, 00:16:04.002 "rw_mbytes_per_sec": 0, 00:16:04.002 "r_mbytes_per_sec": 0, 00:16:04.002 "w_mbytes_per_sec": 0 00:16:04.002 }, 00:16:04.002 "claimed": false, 00:16:04.002 "zoned": false, 00:16:04.002 "supported_io_types": { 00:16:04.002 "read": true, 00:16:04.002 "write": true, 00:16:04.002 "unmap": true, 00:16:04.002 "write_zeroes": true, 00:16:04.002 "flush": false, 00:16:04.002 "reset": true, 00:16:04.002 "compare": false, 00:16:04.002 "compare_and_write": false, 00:16:04.002 "abort": false, 00:16:04.002 "nvme_admin": false, 00:16:04.002 "nvme_io": false 00:16:04.002 }, 00:16:04.002 "driver_specific": { 00:16:04.002 "lvol": { 00:16:04.002 "lvol_store_uuid": "57cb1f45-8990-4b60-9f65-d3fff4ec9fc0", 00:16:04.002 "base_bdev": "aio_bdev", 00:16:04.002 "thin_provision": false, 00:16:04.002 "num_allocated_clusters": 38, 00:16:04.002 "snapshot": false, 00:16:04.002 "clone": false, 00:16:04.002 "esnap_clone": false 00:16:04.002 } 00:16:04.002 } 00:16:04.002 } 00:16:04.002 ] 00:16:04.002 19:31:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:16:04.002 19:31:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 57cb1f45-8990-4b60-9f65-d3fff4ec9fc0 00:16:04.002 19:31:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:16:04.264 19:31:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:16:04.264 19:31:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 57cb1f45-8990-4b60-9f65-d3fff4ec9fc0 00:16:04.264 19:31:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:16:04.264 19:31:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:16:04.264 19:31:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:04.525 [2024-05-15 19:31:30.603264] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:04.525 19:31:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
57cb1f45-8990-4b60-9f65-d3fff4ec9fc0 00:16:04.525 19:31:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:16:04.525 19:31:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 57cb1f45-8990-4b60-9f65-d3fff4ec9fc0 00:16:04.525 19:31:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:04.525 19:31:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:04.525 19:31:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:04.525 19:31:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:04.525 19:31:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:04.525 19:31:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:04.525 19:31:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:04.525 19:31:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:04.525 19:31:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 57cb1f45-8990-4b60-9f65-d3fff4ec9fc0 00:16:04.785 request: 00:16:04.785 { 00:16:04.785 "uuid": "57cb1f45-8990-4b60-9f65-d3fff4ec9fc0", 00:16:04.785 "method": "bdev_lvol_get_lvstores", 00:16:04.785 "req_id": 1 00:16:04.785 } 00:16:04.785 Got JSON-RPC error response 00:16:04.785 response: 00:16:04.785 { 00:16:04.786 "code": -19, 00:16:04.786 "message": "No such device" 00:16:04.786 } 00:16:04.786 19:31:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:16:04.786 19:31:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:04.786 19:31:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:04.786 19:31:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:04.786 19:31:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:05.046 aio_bdev 00:16:05.046 19:31:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c3e41fa2-81e0-4b45-ba15-ac3cbf3f0295 00:16:05.046 19:31:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=c3e41fa2-81e0-4b45-ba15-ac3cbf3f0295 00:16:05.046 19:31:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:05.046 19:31:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:16:05.046 19:31:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 
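The recovery pass above is the core assertion of lvs_grow_dirty: after the target was killed without unloading the lvstore, re-creating the AIO bdev triggers blobstore recovery (the bs_recover notices) and the grown geometry must still be visible. A condensed sketch of those checks, using the UUIDs printed in the log and the same path shorthand as before:

  $RPC bdev_aio_create aio_bdev_file aio_bdev 4096      # load triggers bs_recover on the dirty blobstore
  $RPC bdev_get_bdevs -b c3e41fa2-81e0-4b45-ba15-ac3cbf3f0295 -t 2000       # lvol is back, 38 clusters allocated
  $RPC bdev_lvol_get_lvstores -u 57cb1f45-8990-4b60-9f65-d3fff4ec9fc0 | jq -r '.[0].free_clusters'         # 61
  $RPC bdev_lvol_get_lvstores -u 57cb1f45-8990-4b60-9f65-d3fff4ec9fc0 | jq -r '.[0].total_data_clusters'   # 99
  $RPC bdev_aio_delete aio_bdev                         # hot-remove closes the lvstore
  $RPC bdev_lvol_get_lvstores -u 57cb1f45-8990-4b60-9f65-d3fff4ec9fc0       # now fails: "No such device"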
00:16:05.046 19:31:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:05.047 19:31:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:05.308 19:31:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c3e41fa2-81e0-4b45-ba15-ac3cbf3f0295 -t 2000 00:16:05.308 [ 00:16:05.308 { 00:16:05.308 "name": "c3e41fa2-81e0-4b45-ba15-ac3cbf3f0295", 00:16:05.308 "aliases": [ 00:16:05.308 "lvs/lvol" 00:16:05.308 ], 00:16:05.308 "product_name": "Logical Volume", 00:16:05.308 "block_size": 4096, 00:16:05.308 "num_blocks": 38912, 00:16:05.308 "uuid": "c3e41fa2-81e0-4b45-ba15-ac3cbf3f0295", 00:16:05.308 "assigned_rate_limits": { 00:16:05.308 "rw_ios_per_sec": 0, 00:16:05.308 "rw_mbytes_per_sec": 0, 00:16:05.308 "r_mbytes_per_sec": 0, 00:16:05.308 "w_mbytes_per_sec": 0 00:16:05.308 }, 00:16:05.308 "claimed": false, 00:16:05.308 "zoned": false, 00:16:05.308 "supported_io_types": { 00:16:05.308 "read": true, 00:16:05.308 "write": true, 00:16:05.308 "unmap": true, 00:16:05.308 "write_zeroes": true, 00:16:05.308 "flush": false, 00:16:05.308 "reset": true, 00:16:05.308 "compare": false, 00:16:05.308 "compare_and_write": false, 00:16:05.308 "abort": false, 00:16:05.308 "nvme_admin": false, 00:16:05.308 "nvme_io": false 00:16:05.308 }, 00:16:05.308 "driver_specific": { 00:16:05.308 "lvol": { 00:16:05.308 "lvol_store_uuid": "57cb1f45-8990-4b60-9f65-d3fff4ec9fc0", 00:16:05.308 "base_bdev": "aio_bdev", 00:16:05.308 "thin_provision": false, 00:16:05.308 "num_allocated_clusters": 38, 00:16:05.308 "snapshot": false, 00:16:05.308 "clone": false, 00:16:05.308 "esnap_clone": false 00:16:05.308 } 00:16:05.308 } 00:16:05.308 } 00:16:05.308 ] 00:16:05.308 19:31:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:16:05.308 19:31:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 57cb1f45-8990-4b60-9f65-d3fff4ec9fc0 00:16:05.308 19:31:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:16:05.568 19:31:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:16:05.568 19:31:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 57cb1f45-8990-4b60-9f65-d3fff4ec9fc0 00:16:05.568 19:31:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:16:05.828 19:31:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:16:05.828 19:31:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c3e41fa2-81e0-4b45-ba15-ac3cbf3f0295 00:16:06.089 19:31:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 57cb1f45-8990-4b60-9f65-d3fff4ec9fc0 00:16:06.349 19:31:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:06.349 19:31:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:06.349 00:16:06.349 real 0m17.849s 00:16:06.349 user 0m46.525s 00:16:06.349 sys 0m2.950s 00:16:06.349 19:31:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:06.349 19:31:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:06.349 ************************************ 00:16:06.349 END TEST lvs_grow_dirty 00:16:06.349 ************************************ 00:16:06.611 19:31:32 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:16:06.611 19:31:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:16:06.611 19:31:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:16:06.611 19:31:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:16:06.611 19:31:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:06.611 19:31:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:16:06.611 19:31:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:16:06.611 19:31:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:16:06.611 19:31:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:06.611 nvmf_trace.0 00:16:06.611 19:31:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:16:06.611 19:31:32 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:16:06.611 19:31:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:06.611 19:31:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:16:06.611 19:31:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:06.611 19:31:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:16:06.611 19:31:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:06.611 19:31:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:06.611 rmmod nvme_tcp 00:16:06.611 rmmod nvme_fabrics 00:16:06.611 rmmod nvme_keyring 00:16:06.611 19:31:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:06.611 19:31:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:16:06.611 19:31:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:16:06.611 19:31:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3540143 ']' 00:16:06.611 19:31:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3540143 00:16:06.611 19:31:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 3540143 ']' 00:16:06.611 19:31:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 3540143 00:16:06.611 19:31:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:16:06.611 19:31:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:06.611 19:31:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3540143 00:16:06.611 19:31:32 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:06.611 19:31:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:06.611 19:31:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3540143' 00:16:06.611 killing process with pid 3540143 00:16:06.611 19:31:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 3540143 00:16:06.611 19:31:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 3540143 00:16:06.875 19:31:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:06.875 19:31:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:06.875 19:31:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:06.875 19:31:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:06.875 19:31:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:06.875 19:31:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.875 19:31:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:06.875 19:31:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.785 19:31:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:08.785 00:16:08.785 real 0m46.280s 00:16:08.785 user 1m9.397s 00:16:08.785 sys 0m11.163s 00:16:08.785 19:31:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:08.785 19:31:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:08.785 ************************************ 00:16:08.785 END TEST nvmf_lvs_grow 00:16:08.785 ************************************ 00:16:09.047 19:31:34 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:16:09.047 19:31:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:09.047 19:31:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:09.047 19:31:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:09.047 ************************************ 00:16:09.047 START TEST nvmf_bdev_io_wait 00:16:09.047 ************************************ 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:16:09.047 * Looking for test storage... 
00:16:09.047 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:16:09.047 19:31:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:17.193 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:17.193 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:17.193 Found net devices under 0000:31:00.0: cvl_0_0 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:17.193 Found net devices under 0000:31:00.1: cvl_0_1 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:17.193 19:31:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:17.193 19:31:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:17.193 19:31:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:17.193 19:31:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:17.193 19:31:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:17.193 19:31:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:17.193 19:31:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:17.193 19:31:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:17.193 19:31:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:17.193 19:31:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:17.193 19:31:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:17.193 19:31:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:17.193 19:31:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:17.193 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:17.193 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.532 ms 00:16:17.193 00:16:17.193 --- 10.0.0.2 ping statistics --- 00:16:17.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.193 rtt min/avg/max/mdev = 0.532/0.532/0.532/0.000 ms 00:16:17.193 19:31:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:17.193 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:17.193 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:16:17.193 00:16:17.193 --- 10.0.0.1 ping statistics --- 00:16:17.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.193 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:16:17.193 19:31:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:17.193 19:31:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:16:17.193 19:31:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:17.193 19:31:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:17.194 19:31:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:17.194 19:31:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:17.194 19:31:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:17.194 19:31:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:17.194 19:31:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:17.454 19:31:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:16:17.454 19:31:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:17.454 19:31:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:17.454 19:31:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:17.454 19:31:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=3545817 00:16:17.454 19:31:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 3545817 00:16:17.454 19:31:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:16:17.454 19:31:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 3545817 ']' 00:16:17.454 19:31:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.454 19:31:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:17.454 19:31:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:17.454 19:31:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:17.454 19:31:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:17.454 [2024-05-15 19:31:43.442415] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
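Before the target application starts, the nvmf_tcp_init sequence traced above puts the two ice/E810 ports (cvl_0_0, cvl_0_1) on a private 10.0.0.0/24 segment, with the target port isolated in its own network namespace and the initiator left in the host stack. A condensed sketch follows, using the values this run happened to pick; it is a paraphrase of the traced commands, not the literal nvmf/common.sh source.

# target side: cvl_0_0 inside namespace cvl_0_0_ns_spdk, 10.0.0.2
# initiator side: cvl_0_1 in the host namespace, 10.0.0.1
ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT      # let the NVMe/TCP port through
ping -c 1 10.0.0.2                                                # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                  # target -> initiator sanity check
modprobe nvme-tcp                                                 # kernel NVMe/TCP initiator module

Every NVMF_APP invocation is then prefixed with `ip netns exec cvl_0_0_ns_spdk` (nvmf/common.sh@270 above), which is why the nvmf_tgt command line runs inside the namespace while the bdevperf initiators run in the host stack.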
00:16:17.454 [2024-05-15 19:31:43.442481] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.454 EAL: No free 2048 kB hugepages reported on node 1 00:16:17.454 [2024-05-15 19:31:43.540178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:17.454 [2024-05-15 19:31:43.638012] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:17.454 [2024-05-15 19:31:43.638077] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:17.454 [2024-05-15 19:31:43.638086] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:17.454 [2024-05-15 19:31:43.638095] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:17.454 [2024-05-15 19:31:43.638103] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:17.454 [2024-05-15 19:31:43.638237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:17.454 [2024-05-15 19:31:43.638386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:17.454 [2024-05-15 19:31:43.638445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.454 [2024-05-15 19:31:43.638446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:18.396 [2024-05-15 19:31:44.426482] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.396 19:31:44 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:18.396 Malloc0 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:18.396 [2024-05-15 19:31:44.491443] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:18.396 [2024-05-15 19:31:44.491689] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3545914 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3545916 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:18.396 { 00:16:18.396 "params": { 00:16:18.396 "name": "Nvme$subsystem", 00:16:18.396 "trtype": "$TEST_TRANSPORT", 00:16:18.396 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:18.396 "adrfam": "ipv4", 00:16:18.396 "trsvcid": "$NVMF_PORT", 00:16:18.396 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:18.396 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:18.396 "hdgst": ${hdgst:-false}, 00:16:18.396 "ddgst": ${ddgst:-false} 00:16:18.396 }, 00:16:18.396 "method": 
"bdev_nvme_attach_controller" 00:16:18.396 } 00:16:18.396 EOF 00:16:18.396 )") 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3545918 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:18.396 { 00:16:18.396 "params": { 00:16:18.396 "name": "Nvme$subsystem", 00:16:18.396 "trtype": "$TEST_TRANSPORT", 00:16:18.396 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:18.396 "adrfam": "ipv4", 00:16:18.396 "trsvcid": "$NVMF_PORT", 00:16:18.396 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:18.396 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:18.396 "hdgst": ${hdgst:-false}, 00:16:18.396 "ddgst": ${ddgst:-false} 00:16:18.396 }, 00:16:18.396 "method": "bdev_nvme_attach_controller" 00:16:18.396 } 00:16:18.396 EOF 00:16:18.396 )") 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3545921 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:18.396 { 00:16:18.396 "params": { 00:16:18.396 "name": "Nvme$subsystem", 00:16:18.396 "trtype": "$TEST_TRANSPORT", 00:16:18.396 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:18.396 "adrfam": "ipv4", 00:16:18.396 "trsvcid": "$NVMF_PORT", 00:16:18.396 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:18.396 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:18.396 "hdgst": ${hdgst:-false}, 00:16:18.396 "ddgst": ${ddgst:-false} 00:16:18.396 }, 00:16:18.396 "method": "bdev_nvme_attach_controller" 00:16:18.396 } 00:16:18.396 EOF 00:16:18.396 )") 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@532 -- # local subsystem config 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:18.396 { 00:16:18.396 "params": { 00:16:18.396 "name": "Nvme$subsystem", 00:16:18.396 "trtype": "$TEST_TRANSPORT", 00:16:18.396 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:18.396 "adrfam": "ipv4", 00:16:18.396 "trsvcid": "$NVMF_PORT", 00:16:18.396 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:18.396 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:18.396 "hdgst": ${hdgst:-false}, 00:16:18.396 "ddgst": ${ddgst:-false} 00:16:18.396 }, 00:16:18.396 "method": "bdev_nvme_attach_controller" 00:16:18.396 } 00:16:18.396 EOF 00:16:18.396 )") 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3545914 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:18.396 "params": { 00:16:18.396 "name": "Nvme1", 00:16:18.396 "trtype": "tcp", 00:16:18.396 "traddr": "10.0.0.2", 00:16:18.396 "adrfam": "ipv4", 00:16:18.396 "trsvcid": "4420", 00:16:18.396 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:18.396 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:18.396 "hdgst": false, 00:16:18.396 "ddgst": false 00:16:18.396 }, 00:16:18.396 "method": "bdev_nvme_attach_controller" 00:16:18.396 }' 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
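The resolved JSON being printed here is what each bdevperf instance reads from /dev/fd/63: gen_nvmf_target_json expands one bdev_nvme_attach_controller object per subsystem from the heredoc template traced above, joins them with `IFS=,` and `printf`, and pretty-prints the result through `jq .`. A minimal reconstruction of that pattern follows; only the per-controller template and the join/jq plumbing are visible in the trace, so the function body and the outer "subsystems"/"bdev" wrapper are assumptions, not the literal nvmf/common.sh source.

# Reconstruction of the assumed helper; the environment variables are the ones set by nvmf/common.sh above.
gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # The "subsystems"/"bdev" wrapper below is an assumption; the trace only shows the
    # comma-joined controller objects passing through `jq .`.
    jq . <<JSON
{ "subsystems": [ { "subsystem": "bdev", "config": [ $(IFS=,; printf '%s\n' "${config[*]}") ] } ] }
JSON
}

# Consumed via process substitution, e.g. the write job traced above (path shortened):
# bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256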
00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:18.396 "params": { 00:16:18.396 "name": "Nvme1", 00:16:18.396 "trtype": "tcp", 00:16:18.396 "traddr": "10.0.0.2", 00:16:18.396 "adrfam": "ipv4", 00:16:18.396 "trsvcid": "4420", 00:16:18.396 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:18.396 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:18.396 "hdgst": false, 00:16:18.396 "ddgst": false 00:16:18.396 }, 00:16:18.396 "method": "bdev_nvme_attach_controller" 00:16:18.396 }' 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:18.396 "params": { 00:16:18.396 "name": "Nvme1", 00:16:18.396 "trtype": "tcp", 00:16:18.396 "traddr": "10.0.0.2", 00:16:18.396 "adrfam": "ipv4", 00:16:18.396 "trsvcid": "4420", 00:16:18.396 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:18.396 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:18.396 "hdgst": false, 00:16:18.396 "ddgst": false 00:16:18.396 }, 00:16:18.396 "method": "bdev_nvme_attach_controller" 00:16:18.396 }' 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:18.396 19:31:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:18.396 "params": { 00:16:18.396 "name": "Nvme1", 00:16:18.397 "trtype": "tcp", 00:16:18.397 "traddr": "10.0.0.2", 00:16:18.397 "adrfam": "ipv4", 00:16:18.397 "trsvcid": "4420", 00:16:18.397 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:18.397 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:18.397 "hdgst": false, 00:16:18.397 "ddgst": false 00:16:18.397 }, 00:16:18.397 "method": "bdev_nvme_attach_controller" 00:16:18.397 }' 00:16:18.397 [2024-05-15 19:31:44.543840] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:16:18.397 [2024-05-15 19:31:44.543889] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:16:18.397 [2024-05-15 19:31:44.544165] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:16:18.397 [2024-05-15 19:31:44.544209] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:16:18.397 [2024-05-15 19:31:44.544876] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:16:18.397 [2024-05-15 19:31:44.544917] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:16:18.397 [2024-05-15 19:31:44.547059] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
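The subsystem all four of these initiator jobs attach to was provisioned over RPC just before they started (the rpc_cmd calls at bdev_io_wait.sh@18-25 in the trace). Condensed below; rpc_cmd is assumed to be the autotest wrapper around scripts/rpc.py pointed at the target's /var/tmp/spdk.sock, and the trailing comments are interpretation, but the method names and arguments are exactly as traced.

rpc_cmd bdev_set_options -p 5 -c 1                 # tiny bdev_io pool/cache so submissions hit the io_wait path
rpc_cmd framework_start_init                       # leave --wait-for-rpc mode
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd bdev_malloc_create 64 512 -b Malloc0       # 64 MB backing bdev, 512-byte blocks
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The write/read/flush/unmap jobs then run concurrently and are reaped by PID after a sync (the `wait 3545914` through `wait 3545921` lines in the surrounding trace), which is why their EAL start-up banners and one-second results interleave below.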
00:16:18.397 [2024-05-15 19:31:44.547108] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:16:18.657 EAL: No free 2048 kB hugepages reported on node 1 00:16:18.657 EAL: No free 2048 kB hugepages reported on node 1 00:16:18.657 [2024-05-15 19:31:44.701544] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.657 EAL: No free 2048 kB hugepages reported on node 1 00:16:18.657 [2024-05-15 19:31:44.752462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:18.657 [2024-05-15 19:31:44.759666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.657 EAL: No free 2048 kB hugepages reported on node 1 00:16:18.657 [2024-05-15 19:31:44.809845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:18.657 [2024-05-15 19:31:44.818305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.918 [2024-05-15 19:31:44.868070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.918 [2024-05-15 19:31:44.870233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:16:18.918 [2024-05-15 19:31:44.917070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:18.918 Running I/O for 1 seconds... 00:16:18.918 Running I/O for 1 seconds... 00:16:18.918 Running I/O for 1 seconds... 00:16:18.918 Running I/O for 1 seconds... 00:16:19.861 00:16:19.861 Latency(us) 00:16:19.861 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:19.861 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:16:19.861 Nvme1n1 : 1.01 11364.21 44.39 0.00 0.00 11218.20 7208.96 15510.19 00:16:19.861 =================================================================================================================== 00:16:19.861 Total : 11364.21 44.39 0.00 0.00 11218.20 7208.96 15510.19 00:16:19.861 00:16:19.861 Latency(us) 00:16:19.861 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:19.861 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:16:19.861 Nvme1n1 : 1.01 12790.11 49.96 0.00 0.00 9976.71 5215.57 20534.61 00:16:19.861 =================================================================================================================== 00:16:19.861 Total : 12790.11 49.96 0.00 0.00 9976.71 5215.57 20534.61 00:16:20.121 00:16:20.121 Latency(us) 00:16:20.121 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:20.121 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:16:20.121 Nvme1n1 : 1.01 11882.22 46.41 0.00 0.00 10738.46 5079.04 23483.73 00:16:20.121 =================================================================================================================== 00:16:20.121 Total : 11882.22 46.41 0.00 0.00 10738.46 5079.04 23483.73 00:16:20.121 00:16:20.121 Latency(us) 00:16:20.121 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:20.121 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:16:20.121 Nvme1n1 : 1.00 167156.83 652.96 0.00 0.00 762.59 273.07 894.29 00:16:20.121 =================================================================================================================== 00:16:20.121 Total : 167156.83 652.96 0.00 0.00 762.59 273.07 894.29 00:16:20.121 19:31:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # 
wait 3545916 00:16:20.121 19:31:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3545918 00:16:20.121 19:31:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3545921 00:16:20.121 19:31:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:20.121 19:31:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.121 19:31:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:20.121 19:31:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.121 19:31:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:16:20.121 19:31:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:16:20.121 19:31:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:20.121 19:31:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:16:20.121 19:31:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:20.121 19:31:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:16:20.121 19:31:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:20.121 19:31:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:20.381 rmmod nvme_tcp 00:16:20.381 rmmod nvme_fabrics 00:16:20.381 rmmod nvme_keyring 00:16:20.381 19:31:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:20.381 19:31:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:16:20.381 19:31:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:16:20.381 19:31:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 3545817 ']' 00:16:20.381 19:31:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 3545817 00:16:20.381 19:31:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 3545817 ']' 00:16:20.381 19:31:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 3545817 00:16:20.381 19:31:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:16:20.382 19:31:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:20.382 19:31:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3545817 00:16:20.382 19:31:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:20.382 19:31:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:20.382 19:31:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3545817' 00:16:20.382 killing process with pid 3545817 00:16:20.382 19:31:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 3545817 00:16:20.382 [2024-05-15 19:31:46.416969] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:20.382 19:31:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 3545817 00:16:20.382 19:31:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:20.382 19:31:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:20.382 19:31:46 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:20.382 19:31:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:20.382 19:31:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:20.382 19:31:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:20.382 19:31:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:20.382 19:31:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.950 19:31:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:22.950 00:16:22.950 real 0m13.587s 00:16:22.950 user 0m19.274s 00:16:22.950 sys 0m7.561s 00:16:22.950 19:31:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:22.950 19:31:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:22.950 ************************************ 00:16:22.950 END TEST nvmf_bdev_io_wait 00:16:22.950 ************************************ 00:16:22.950 19:31:48 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:16:22.950 19:31:48 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:22.950 19:31:48 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:22.950 19:31:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:22.950 ************************************ 00:16:22.950 START TEST nvmf_queue_depth 00:16:22.950 ************************************ 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:16:22.950 * Looking for test storage... 
00:16:22.950 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:16:22.950 19:31:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:31.094 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:31.094 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:16:31.094 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:31.094 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:31.094 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:31.094 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:31.094 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:31.094 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:16:31.094 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:31.094 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:16:31.094 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:16:31.094 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:16:31.094 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:16:31.094 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:16:31.094 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:16:31.094 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:31.094 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:31.094 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:31.094 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:31.094 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:31.094 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:31.094 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:31.094 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:31.094 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:31.094 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:31.094 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:31.094 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:31.094 
19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:31.094 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:31.094 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:31.094 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:31.094 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:31.094 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:31.094 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:31.094 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:31.094 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:31.094 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:31.094 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:31.094 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:31.094 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:31.094 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:31.094 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:31.094 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:31.095 Found net devices under 0000:31:00.0: cvl_0_0 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:31.095 Found net devices under 0000:31:00.1: cvl_0_1 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:31.095 19:31:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:31.095 19:31:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:31.095 19:31:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:31.095 19:31:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:31.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:31.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:16:31.095 00:16:31.095 --- 10.0.0.2 ping statistics --- 00:16:31.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.095 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:16:31.095 19:31:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:31.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:31.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:16:31.095 00:16:31.095 --- 10.0.0.1 ping statistics --- 00:16:31.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.095 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:16:31.095 19:31:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:31.095 19:31:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:16:31.095 19:31:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:31.095 19:31:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:31.095 19:31:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:31.095 19:31:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:31.095 19:31:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:31.095 19:31:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:31.095 19:31:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:31.095 19:31:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:16:31.095 19:31:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:31.095 19:31:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:31.095 19:31:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:31.095 19:31:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:31.095 19:31:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=3551039 00:16:31.095 19:31:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 3551039 00:16:31.095 19:31:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 3551039 ']' 00:16:31.095 19:31:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.095 19:31:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:31.095 19:31:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:31.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:31.095 19:31:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:31.095 19:31:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:31.095 [2024-05-15 19:31:57.158302] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
00:16:31.095 [2024-05-15 19:31:57.158361] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:31.095 EAL: No free 2048 kB hugepages reported on node 1 00:16:31.095 [2024-05-15 19:31:57.223587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.356 [2024-05-15 19:31:57.288638] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:31.356 [2024-05-15 19:31:57.288669] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:31.356 [2024-05-15 19:31:57.288676] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:31.356 [2024-05-15 19:31:57.288683] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:31.356 [2024-05-15 19:31:57.288688] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:31.356 [2024-05-15 19:31:57.288706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:31.356 19:31:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:31.356 19:31:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:16:31.356 19:31:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:31.356 19:31:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:31.356 19:31:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:31.356 19:31:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:31.356 19:31:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:31.356 19:31:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.356 19:31:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:31.356 [2024-05-15 19:31:57.414043] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:31.356 19:31:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.356 19:31:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:31.356 19:31:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.356 19:31:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:31.356 Malloc0 00:16:31.356 19:31:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.356 19:31:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:31.356 19:31:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.356 19:31:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:31.356 19:31:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.356 19:31:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:31.356 19:31:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.356 19:31:57 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:31.356 19:31:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.356 19:31:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:31.356 19:31:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.356 19:31:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:31.357 [2024-05-15 19:31:57.487493] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:31.357 [2024-05-15 19:31:57.487712] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:31.357 19:31:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.357 19:31:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3551229 00:16:31.357 19:31:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:31.357 19:31:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:16:31.357 19:31:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3551229 /var/tmp/bdevperf.sock 00:16:31.357 19:31:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 3551229 ']' 00:16:31.357 19:31:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:31.357 19:31:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:31.357 19:31:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:31.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:31.357 19:31:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:31.357 19:31:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:31.357 [2024-05-15 19:31:57.538093] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
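The queue-depth measurement drives bdevperf over RPC rather than through a JSON config: the app starts idle (-z) on its own socket, the NVMe-oF controller is attached through that socket, and the 10-second verify workload at queue depth 1024 is only then kicked off with bdevperf.py. Condensed from the surrounding trace (paths shortened; the backgrounding of bdevperf is implied by the captured PID, not shown literally in the trace):

bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
bdevperf_pid=$!                                     # 3551229 in this run
waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock
rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1      # exposes NVMe0n1 to bdevperf
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests                # runs the verify job for 10 s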
00:16:31.357 [2024-05-15 19:31:57.538139] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3551229 ] 00:16:31.618 EAL: No free 2048 kB hugepages reported on node 1 00:16:31.618 [2024-05-15 19:31:57.621100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.618 [2024-05-15 19:31:57.685821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.559 19:31:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:32.559 19:31:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:16:32.559 19:31:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:32.559 19:31:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.559 19:31:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:32.559 NVMe0n1 00:16:32.559 19:31:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.559 19:31:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:32.559 Running I/O for 10 seconds... 00:16:42.644 00:16:42.644 Latency(us) 00:16:42.644 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.644 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:16:42.644 Verification LBA range: start 0x0 length 0x4000 00:16:42.644 NVMe0n1 : 10.06 9383.82 36.66 0.00 0.00 108640.92 11195.73 76021.76 00:16:42.644 =================================================================================================================== 00:16:42.644 Total : 9383.82 36.66 0.00 0.00 108640.92 11195.73 76021.76 00:16:42.644 0 00:16:42.644 19:32:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3551229 00:16:42.644 19:32:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 3551229 ']' 00:16:42.644 19:32:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 3551229 00:16:42.644 19:32:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:16:42.644 19:32:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:42.644 19:32:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3551229 00:16:42.905 19:32:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:42.905 19:32:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:42.905 19:32:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3551229' 00:16:42.905 killing process with pid 3551229 00:16:42.905 19:32:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 3551229 00:16:42.905 Received shutdown signal, test time was about 10.000000 seconds 00:16:42.905 00:16:42.905 Latency(us) 00:16:42.905 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.905 =================================================================================================================== 00:16:42.905 Total 
: 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:42.905 19:32:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 3551229 00:16:42.905 19:32:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:42.905 19:32:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:16:42.905 19:32:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:42.905 19:32:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:16:42.905 19:32:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:42.905 19:32:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:16:42.905 19:32:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:42.905 19:32:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:42.905 rmmod nvme_tcp 00:16:42.905 rmmod nvme_fabrics 00:16:42.905 rmmod nvme_keyring 00:16:42.905 19:32:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:42.905 19:32:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:16:42.905 19:32:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:16:42.905 19:32:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 3551039 ']' 00:16:42.905 19:32:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 3551039 00:16:42.905 19:32:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 3551039 ']' 00:16:42.905 19:32:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 3551039 00:16:42.905 19:32:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:16:42.905 19:32:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:42.905 19:32:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3551039 00:16:43.165 19:32:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:43.165 19:32:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:43.165 19:32:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3551039' 00:16:43.165 killing process with pid 3551039 00:16:43.165 19:32:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 3551039 00:16:43.165 [2024-05-15 19:32:09.115021] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:43.165 19:32:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 3551039 00:16:43.165 19:32:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:43.165 19:32:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:43.165 19:32:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:43.165 19:32:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:43.165 19:32:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:43.165 19:32:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.165 19:32:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:43.165 19:32:09 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.711 19:32:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:45.711 00:16:45.711 real 0m22.618s 00:16:45.711 user 0m25.595s 00:16:45.711 sys 0m7.276s 00:16:45.711 19:32:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:45.711 19:32:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:45.711 ************************************ 00:16:45.711 END TEST nvmf_queue_depth 00:16:45.711 ************************************ 00:16:45.711 19:32:11 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:45.711 19:32:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:45.711 19:32:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:45.711 19:32:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:45.711 ************************************ 00:16:45.711 START TEST nvmf_target_multipath 00:16:45.711 ************************************ 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:45.711 * Looking for test storage... 00:16:45.711 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:16:45.711 19:32:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:53.850 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:53.850 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:16:53.850 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:53.850 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:53.850 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:53.850 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:53.850 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:53.850 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:16:53.850 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:53.850 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:16:53.850 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:16:53.850 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:16:53.850 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:16:53.850 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:16:53.850 19:32:19 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:16:53.850 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:53.850 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:53.850 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:53.850 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:53.850 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:53.850 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:53.850 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:53.850 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:53.850 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:53.850 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:53.850 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:53.850 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:53.850 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:53.850 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:53.850 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:53.850 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:53.850 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:53.850 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:53.850 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:53.851 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:53.851 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:53.851 19:32:19 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:53.851 Found net devices under 0000:31:00.0: cvl_0_0 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:53.851 Found net devices under 0000:31:00.1: cvl_0_1 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:53.851 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:53.851 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms 00:16:53.851 00:16:53.851 --- 10.0.0.2 ping statistics --- 00:16:53.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.851 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:53.851 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:53.851 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.340 ms 00:16:53.851 00:16:53.851 --- 10.0.0.1 ping statistics --- 00:16:53.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.851 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:16:53.851 only one NIC for nvmf test 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:53.851 rmmod nvme_tcp 00:16:53.851 rmmod nvme_fabrics 00:16:53.851 rmmod nvme_keyring 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:53.851 19:32:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.396 19:32:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:16:56.396 19:32:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:16:56.396 19:32:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:16:56.396 19:32:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:56.396 19:32:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:16:56.396 19:32:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:56.396 19:32:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:56.396 19:32:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:56.396 19:32:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:56.396 19:32:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:56.396 19:32:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:56.396 19:32:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:56.396 19:32:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:56.396 19:32:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:56.396 19:32:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:56.396 19:32:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:56.396 19:32:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:56.396 19:32:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:56.396 19:32:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.396 19:32:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:56.396 19:32:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.396 19:32:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:56.396 00:16:56.396 real 0m10.675s 00:16:56.396 user 0m2.249s 00:16:56.396 sys 0m6.316s 00:16:56.396 19:32:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:56.396 19:32:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:56.396 ************************************ 00:16:56.396 END TEST nvmf_target_multipath 00:16:56.396 ************************************ 00:16:56.396 19:32:22 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:56.396 19:32:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:56.396 19:32:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:56.396 19:32:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:56.396 ************************************ 00:16:56.396 START TEST nvmf_zcopy 00:16:56.396 ************************************ 00:16:56.396 19:32:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:56.396 * Looking for test storage... 
00:16:56.396 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:56.396 19:32:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:56.396 19:32:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:16:56.396 19:32:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:56.396 19:32:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:56.396 19:32:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:56.396 19:32:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:56.396 19:32:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:56.396 19:32:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:56.396 19:32:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:56.396 19:32:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:56.396 19:32:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:56.396 19:32:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:56.396 19:32:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:56.396 19:32:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:56.396 19:32:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:56.396 19:32:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:56.396 19:32:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:56.396 19:32:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:56.396 19:32:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:56.396 19:32:22 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:56.396 19:32:22 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:56.396 19:32:22 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:56.396 19:32:22 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.396 19:32:22 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
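A note on the initiator-side variables just set up: common.sh generates a fresh host NQN with nvme gen-hostnqn and keeps its UUID suffix as the host ID (compare the two values in the trace), and NVME_CONNECT/NVME_HOST exist so that kernel-initiator tests can compose an nvme-cli connect call from them. This zcopy run drives I/O through bdevperf instead, so they are not exercised here; purely as a hedged sketch, the command those variables would compose is along these lines (address, port and subsystem NQN taken from later in this same run):

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:00539ede-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # bare UUID; the trace shows the host ID equals the
                                       # NQN's UUID suffix (exact derivation in common.sh may differ)
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"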
00:16:56.396 19:32:22 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.396 19:32:22 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:16:56.397 19:32:22 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.397 19:32:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:16:56.397 19:32:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:56.397 19:32:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:56.397 19:32:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:56.397 19:32:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:56.397 19:32:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:56.397 19:32:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:56.397 19:32:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:56.397 19:32:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:56.397 19:32:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:16:56.397 19:32:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:56.397 19:32:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:56.397 19:32:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:56.397 19:32:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:56.397 19:32:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:56.397 19:32:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.397 19:32:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:56.397 19:32:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.397 19:32:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:56.397 19:32:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:56.397 19:32:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:16:56.397 19:32:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:04.576 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:04.576 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:17:04.576 19:32:30 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:17:04.576 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:04.576 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:04.576 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:04.576 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:04.576 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:17:04.576 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:04.576 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:17:04.576 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:17:04.576 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:17:04.576 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:17:04.576 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:17:04.576 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:17:04.576 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:04.576 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:04.576 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:04.576 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:04.576 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:04.576 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:04.576 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:04.576 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:04.576 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:04.576 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:04.576 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:04.576 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:04.576 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:04.576 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:04.576 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:04.576 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:04.576 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:04.576 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:04.576 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:04.576 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:04.576 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:04.576 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:04.576 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:04.576 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:04.577 
19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:04.577 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:04.577 Found net devices under 0000:31:00.0: cvl_0_0 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:04.577 Found net devices under 0000:31:00.1: cvl_0_1 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:04.577 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:04.577 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.737 ms 00:17:04.577 00:17:04.577 --- 10.0.0.2 ping statistics --- 00:17:04.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.577 rtt min/avg/max/mdev = 0.737/0.737/0.737/0.000 ms 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:04.577 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:04.577 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:17:04.577 00:17:04.577 --- 10.0.0.1 ping statistics --- 00:17:04.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.577 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3562687 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3562687 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 3562687 ']' 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:04.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:04.577 19:32:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:04.577 [2024-05-15 19:32:30.713519] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:17:04.577 [2024-05-15 19:32:30.713590] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:04.577 EAL: No free 2048 kB hugepages reported on node 1 00:17:04.838 [2024-05-15 19:32:30.795261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.838 [2024-05-15 19:32:30.870388] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:04.838 [2024-05-15 19:32:30.870428] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
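At this point nvmf_tcp_init has finished rebuilding the single-host loopback topology that this zcopy test (like the earlier runs) relies on: one of the two E810 ports (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target interface, its sibling port (cvl_0_1) stays in the default namespace as the initiator, and the two ends are addressed 10.0.0.2 and 10.0.0.1. Condensed from the trace above, the setup boils down to:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

nvmf_tgt is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2), which is the process whose EAL and reactor start-up notices follow.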
00:17:04.838 [2024-05-15 19:32:30.870436] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:04.838 [2024-05-15 19:32:30.870443] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:04.838 [2024-05-15 19:32:30.870449] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:04.838 [2024-05-15 19:32:30.870475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:05.408 19:32:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:05.408 19:32:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:17:05.408 19:32:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:05.408 19:32:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:05.408 19:32:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:05.668 19:32:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:05.668 19:32:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:17:05.668 19:32:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:17:05.668 19:32:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.668 19:32:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:05.668 [2024-05-15 19:32:31.617894] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:05.668 19:32:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.668 19:32:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:05.668 19:32:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.668 19:32:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:05.668 19:32:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.668 19:32:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:05.668 19:32:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.668 19:32:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:05.668 [2024-05-15 19:32:31.633874] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:05.668 [2024-05-15 19:32:31.634071] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:05.668 19:32:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.668 19:32:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:05.668 19:32:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.669 19:32:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:05.669 19:32:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.669 19:32:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:17:05.669 19:32:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:05.669 19:32:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:05.669 malloc0 00:17:05.669 19:32:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.669 19:32:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:05.669 19:32:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.669 19:32:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:05.669 19:32:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.669 19:32:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:17:05.669 19:32:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:17:05.669 19:32:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:17:05.669 19:32:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:17:05.669 19:32:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:05.669 19:32:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:05.669 { 00:17:05.669 "params": { 00:17:05.669 "name": "Nvme$subsystem", 00:17:05.669 "trtype": "$TEST_TRANSPORT", 00:17:05.669 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:05.669 "adrfam": "ipv4", 00:17:05.669 "trsvcid": "$NVMF_PORT", 00:17:05.669 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:05.669 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:05.669 "hdgst": ${hdgst:-false}, 00:17:05.669 "ddgst": ${ddgst:-false} 00:17:05.669 }, 00:17:05.669 "method": "bdev_nvme_attach_controller" 00:17:05.669 } 00:17:05.669 EOF 00:17:05.669 )") 00:17:05.669 19:32:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:17:05.669 19:32:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:17:05.669 19:32:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:17:05.669 19:32:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:05.669 "params": { 00:17:05.669 "name": "Nvme1", 00:17:05.669 "trtype": "tcp", 00:17:05.669 "traddr": "10.0.0.2", 00:17:05.669 "adrfam": "ipv4", 00:17:05.669 "trsvcid": "4420", 00:17:05.669 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:05.669 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:05.669 "hdgst": false, 00:17:05.669 "ddgst": false 00:17:05.669 }, 00:17:05.669 "method": "bdev_nvme_attach_controller" 00:17:05.669 }' 00:17:05.669 [2024-05-15 19:32:31.713702] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:17:05.669 [2024-05-15 19:32:31.713750] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3563020 ] 00:17:05.669 EAL: No free 2048 kB hugepages reported on node 1 00:17:05.669 [2024-05-15 19:32:31.795310] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.929 [2024-05-15 19:32:31.859660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.189 Running I/O for 10 seconds... 
00:17:16.183
00:17:16.183                                                                                        Latency(us)
00:17:16.183 Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:17:16.183 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:17:16.183    Verification LBA range: start 0x0 length 0x1000
00:17:16.183    Nvme1n1                             :      10.01    6857.54      53.57       0.00      0.00   18608.45    3358.72   28617.39
00:17:16.183 ===================================================================================================================
00:17:16.183 Total                                  :               6857.54      53.57       0.00      0.00   18608.45    3358.72   28617.39
00:17:16.183 19:32:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3565025
00:17:16.183 19:32:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:17:16.183 19:32:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:17:16.183 19:32:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:17:16.183 19:32:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:17:16.183 19:32:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:17:16.183 19:32:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:17:16.183 19:32:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:17:16.183 19:32:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:17:16.183 {
00:17:16.183   "params": {
00:17:16.183     "name": "Nvme$subsystem",
00:17:16.183     "trtype": "$TEST_TRANSPORT",
00:17:16.183     "traddr": "$NVMF_FIRST_TARGET_IP",
00:17:16.183     "adrfam": "ipv4",
00:17:16.183     "trsvcid": "$NVMF_PORT",
00:17:16.183     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:17:16.183     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:17:16.183     "hdgst": ${hdgst:-false},
00:17:16.183     "ddgst": ${ddgst:-false}
00:17:16.183   },
00:17:16.183   "method": "bdev_nvme_attach_controller"
00:17:16.183 }
00:17:16.183 EOF
00:17:16.183 )")
00:17:16.183 [2024-05-15 19:32:42.338630] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:16.183 [2024-05-15 19:32:42.338665] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:16.183 19:32:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:17:16.183 19:32:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:17:16.183 [2024-05-15 19:32:42.346614] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.183 [2024-05-15 19:32:42.346627] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.183 19:32:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:17:16.183 19:32:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:16.183 "params": { 00:17:16.183 "name": "Nvme1", 00:17:16.183 "trtype": "tcp", 00:17:16.183 "traddr": "10.0.0.2", 00:17:16.183 "adrfam": "ipv4", 00:17:16.183 "trsvcid": "4420", 00:17:16.183 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:16.183 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:16.183 "hdgst": false, 00:17:16.183 "ddgst": false 00:17:16.183 }, 00:17:16.183 "method": "bdev_nvme_attach_controller" 00:17:16.183 }' 00:17:16.183 [2024-05-15 19:32:42.354634] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.183 [2024-05-15 19:32:42.354645] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.183 [2024-05-15 19:32:42.362656] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.183 [2024-05-15 19:32:42.362666] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.444 [2024-05-15 19:32:42.370679] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.444 [2024-05-15 19:32:42.370689] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.444 [2024-05-15 19:32:42.378699] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.444 [2024-05-15 19:32:42.378709] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.444 [2024-05-15 19:32:42.381850] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
00:17:16.444 [2024-05-15 19:32:42.381898] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3565025 ] 00:17:16.444 [2024-05-15 19:32:42.386718] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.444 [2024-05-15 19:32:42.386729] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.444 [2024-05-15 19:32:42.394738] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.444 [2024-05-15 19:32:42.394748] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.444 [2024-05-15 19:32:42.402760] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.444 [2024-05-15 19:32:42.402769] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.444 EAL: No free 2048 kB hugepages reported on node 1 00:17:16.445 [2024-05-15 19:32:42.410783] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.445 [2024-05-15 19:32:42.410793] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.445 [2024-05-15 19:32:42.418803] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.445 [2024-05-15 19:32:42.418812] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.445 [2024-05-15 19:32:42.426826] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.445 [2024-05-15 19:32:42.426837] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.445 [2024-05-15 19:32:42.434848] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.445 [2024-05-15 19:32:42.434858] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.445 [2024-05-15 19:32:42.442868] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.445 [2024-05-15 19:32:42.442879] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.445 [2024-05-15 19:32:42.450888] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.445 [2024-05-15 19:32:42.450898] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.445 [2024-05-15 19:32:42.458909] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.445 [2024-05-15 19:32:42.458919] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.445 [2024-05-15 19:32:42.463032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.445 [2024-05-15 19:32:42.466933] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.445 [2024-05-15 19:32:42.466943] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.445 [2024-05-15 19:32:42.474955] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.445 [2024-05-15 19:32:42.474966] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.445 [2024-05-15 19:32:42.482977] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.445 [2024-05-15 19:32:42.482988] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.445 [2024-05-15 19:32:42.490997] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.445 [2024-05-15 19:32:42.491008] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.445 [2024-05-15 19:32:42.499021] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.445 [2024-05-15 19:32:42.499035] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.445 [2024-05-15 19:32:42.507041] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.445 [2024-05-15 19:32:42.507052] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.445 [2024-05-15 19:32:42.515063] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.445 [2024-05-15 19:32:42.515073] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.445 [2024-05-15 19:32:42.523084] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.445 [2024-05-15 19:32:42.523094] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.445 [2024-05-15 19:32:42.527019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.445 [2024-05-15 19:32:42.531106] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.445 [2024-05-15 19:32:42.531116] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.445 [2024-05-15 19:32:42.539130] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.445 [2024-05-15 19:32:42.539143] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.445 [2024-05-15 19:32:42.547155] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.445 [2024-05-15 19:32:42.547169] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.445 [2024-05-15 19:32:42.559187] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.445 [2024-05-15 19:32:42.559198] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.445 [2024-05-15 19:32:42.567206] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.445 [2024-05-15 19:32:42.567216] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.445 [2024-05-15 19:32:42.575230] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.445 [2024-05-15 19:32:42.575240] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.445 [2024-05-15 19:32:42.583251] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.445 [2024-05-15 19:32:42.583260] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.445 [2024-05-15 19:32:42.591273] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.445 [2024-05-15 19:32:42.591283] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.445 [2024-05-15 19:32:42.599299] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.445 [2024-05-15 19:32:42.599318] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:17:16.445 [2024-05-15 19:32:42.607368] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.445 [2024-05-15 19:32:42.607383] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.445 [2024-05-15 19:32:42.615344] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.445 [2024-05-15 19:32:42.615356] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.445 [2024-05-15 19:32:42.623366] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.445 [2024-05-15 19:32:42.623379] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.706 [2024-05-15 19:32:42.631386] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.706 [2024-05-15 19:32:42.631398] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.706 [2024-05-15 19:32:42.639408] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.706 [2024-05-15 19:32:42.639418] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.706 [2024-05-15 19:32:42.647431] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.706 [2024-05-15 19:32:42.647440] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.706 [2024-05-15 19:32:42.655454] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.706 [2024-05-15 19:32:42.655463] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.706 [2024-05-15 19:32:42.663475] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.706 [2024-05-15 19:32:42.663485] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.706 [2024-05-15 19:32:42.671499] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.706 [2024-05-15 19:32:42.671512] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.706 [2024-05-15 19:32:42.679521] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.706 [2024-05-15 19:32:42.679533] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.706 [2024-05-15 19:32:42.687540] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.706 [2024-05-15 19:32:42.687550] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.706 [2024-05-15 19:32:42.695563] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.706 [2024-05-15 19:32:42.695573] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.706 [2024-05-15 19:32:42.703586] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.706 [2024-05-15 19:32:42.703596] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.706 [2024-05-15 19:32:42.711608] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.706 [2024-05-15 19:32:42.711618] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.706 [2024-05-15 19:32:42.719630] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:17:16.706 [2024-05-15 19:32:42.719640] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.706 [2024-05-15 19:32:42.727653] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.706 [2024-05-15 19:32:42.727667] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.706 [2024-05-15 19:32:42.735678] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.706 [2024-05-15 19:32:42.735691] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.706 [2024-05-15 19:32:42.743699] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.706 [2024-05-15 19:32:42.743709] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.706 [2024-05-15 19:32:42.751720] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.706 [2024-05-15 19:32:42.751731] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.706 [2024-05-15 19:32:42.759741] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.706 [2024-05-15 19:32:42.759752] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.706 [2024-05-15 19:32:42.767765] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.706 [2024-05-15 19:32:42.767776] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.706 [2024-05-15 19:32:42.775785] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.706 [2024-05-15 19:32:42.775797] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.706 [2024-05-15 19:32:42.783806] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.706 [2024-05-15 19:32:42.783816] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.706 [2024-05-15 19:32:42.791829] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.706 [2024-05-15 19:32:42.791839] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.706 [2024-05-15 19:32:42.799849] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.706 [2024-05-15 19:32:42.799859] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.706 [2024-05-15 19:32:42.807870] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.706 [2024-05-15 19:32:42.807880] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.706 [2024-05-15 19:32:42.815891] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.706 [2024-05-15 19:32:42.815902] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.706 [2024-05-15 19:32:42.823929] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.706 [2024-05-15 19:32:42.823948] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.706 [2024-05-15 19:32:42.831938] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.706 [2024-05-15 19:32:42.831949] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.706 Running I/O for 5 
seconds... 00:17:16.706 [2024-05-15 19:32:42.839958] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.706 [2024-05-15 19:32:42.839971] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.706 [2024-05-15 19:32:42.851437] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.706 [2024-05-15 19:32:42.851457] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.706 [2024-05-15 19:32:42.860933] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.706 [2024-05-15 19:32:42.860952] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.706 [2024-05-15 19:32:42.870646] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.706 [2024-05-15 19:32:42.870664] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.706 [2024-05-15 19:32:42.880471] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.706 [2024-05-15 19:32:42.880489] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.706 [2024-05-15 19:32:42.890326] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.706 [2024-05-15 19:32:42.890344] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.966 [2024-05-15 19:32:42.899758] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.966 [2024-05-15 19:32:42.899776] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.966 [2024-05-15 19:32:42.911271] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.966 [2024-05-15 19:32:42.911291] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.966 [2024-05-15 19:32:42.919909] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.966 [2024-05-15 19:32:42.919927] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.966 [2024-05-15 19:32:42.929970] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.966 [2024-05-15 19:32:42.929989] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.966 [2024-05-15 19:32:42.939131] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.966 [2024-05-15 19:32:42.939150] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.966 [2024-05-15 19:32:42.949083] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.966 [2024-05-15 19:32:42.949101] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.966 [2024-05-15 19:32:42.960549] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.966 [2024-05-15 19:32:42.960567] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.966 [2024-05-15 19:32:42.969293] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.966 [2024-05-15 19:32:42.969311] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.966 [2024-05-15 19:32:42.979269] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:17:16.966 [2024-05-15 19:32:42.979287] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.966 [2024-05-15 19:32:42.988814] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.966 [2024-05-15 19:32:42.988832] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.966 [2024-05-15 19:32:42.998299] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.966 [2024-05-15 19:32:42.998327] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.966 [2024-05-15 19:32:43.007707] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.966 [2024-05-15 19:32:43.007726] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.966 [2024-05-15 19:32:43.017044] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.966 [2024-05-15 19:32:43.017062] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.966 [2024-05-15 19:32:43.026406] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.966 [2024-05-15 19:32:43.026425] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.966 [2024-05-15 19:32:43.035797] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.966 [2024-05-15 19:32:43.035815] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.966 [2024-05-15 19:32:43.045369] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.966 [2024-05-15 19:32:43.045387] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.966 [2024-05-15 19:32:43.054631] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.966 [2024-05-15 19:32:43.054649] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.966 [2024-05-15 19:32:43.064082] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.966 [2024-05-15 19:32:43.064101] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.966 [2024-05-15 19:32:43.073664] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.966 [2024-05-15 19:32:43.073683] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.966 [2024-05-15 19:32:43.083102] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.966 [2024-05-15 19:32:43.083125] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.966 [2024-05-15 19:32:43.092489] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.966 [2024-05-15 19:32:43.092507] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.966 [2024-05-15 19:32:43.102265] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.966 [2024-05-15 19:32:43.102283] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.966 [2024-05-15 19:32:43.111623] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.966 [2024-05-15 19:32:43.111641] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.966 [2024-05-15 19:32:43.121342] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.966 [2024-05-15 19:32:43.121360] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.966 [2024-05-15 19:32:43.130787] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.966 [2024-05-15 19:32:43.130805] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.966 [2024-05-15 19:32:43.141899] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.966 [2024-05-15 19:32:43.141918] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.966 [2024-05-15 19:32:43.149935] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.966 [2024-05-15 19:32:43.149953] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.227 [2024-05-15 19:32:43.161546] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.227 [2024-05-15 19:32:43.161564] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.227 [2024-05-15 19:32:43.170014] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.227 [2024-05-15 19:32:43.170033] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.227 [2024-05-15 19:32:43.179979] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.227 [2024-05-15 19:32:43.179998] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.227 [2024-05-15 19:32:43.189173] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.227 [2024-05-15 19:32:43.189192] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.227 [2024-05-15 19:32:43.198112] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.227 [2024-05-15 19:32:43.198131] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.227 [2024-05-15 19:32:43.207860] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.227 [2024-05-15 19:32:43.207878] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.227 [2024-05-15 19:32:43.217537] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.227 [2024-05-15 19:32:43.217555] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.227 [2024-05-15 19:32:43.226586] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.227 [2024-05-15 19:32:43.226605] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.227 [2024-05-15 19:32:43.236391] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.227 [2024-05-15 19:32:43.236410] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.227 [2024-05-15 19:32:43.246359] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.227 [2024-05-15 19:32:43.246377] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.227 [2024-05-15 19:32:43.257573] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.227 [2024-05-15 19:32:43.257591] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.227 [2024-05-15 19:32:43.265756] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.227 [2024-05-15 19:32:43.265778] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.227 [2024-05-15 19:32:43.277394] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.227 [2024-05-15 19:32:43.277413] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.227 [2024-05-15 19:32:43.286059] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.227 [2024-05-15 19:32:43.286077] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.227 [2024-05-15 19:32:43.295645] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.227 [2024-05-15 19:32:43.295663] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.227 [2024-05-15 19:32:43.305320] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.227 [2024-05-15 19:32:43.305338] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.227 [2024-05-15 19:32:43.314837] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.227 [2024-05-15 19:32:43.314855] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.227 [2024-05-15 19:32:43.324147] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.227 [2024-05-15 19:32:43.324166] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.227 [2024-05-15 19:32:43.334145] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.227 [2024-05-15 19:32:43.334163] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.227 [2024-05-15 19:32:43.345016] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.227 [2024-05-15 19:32:43.345035] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.227 [2024-05-15 19:32:43.353358] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.227 [2024-05-15 19:32:43.353376] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.227 [2024-05-15 19:32:43.365137] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.227 [2024-05-15 19:32:43.365156] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.227 [2024-05-15 19:32:43.373833] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.227 [2024-05-15 19:32:43.373851] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.227 [2024-05-15 19:32:43.384995] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.227 [2024-05-15 19:32:43.385015] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.227 [2024-05-15 19:32:43.393390] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.227 [2024-05-15 19:32:43.393408] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.227 [2024-05-15 19:32:43.403073] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.227 [2024-05-15 19:32:43.403091] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.488 [2024-05-15 19:32:43.412533] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.488 [2024-05-15 19:32:43.412551] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.488 [2024-05-15 19:32:43.422078] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.488 [2024-05-15 19:32:43.422096] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.488 [2024-05-15 19:32:43.431362] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.488 [2024-05-15 19:32:43.431380] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.488 [2024-05-15 19:32:43.440825] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.488 [2024-05-15 19:32:43.440843] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.489 [2024-05-15 19:32:43.450338] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.489 [2024-05-15 19:32:43.450360] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.489 [2024-05-15 19:32:43.459815] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.489 [2024-05-15 19:32:43.459833] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.489 [2024-05-15 19:32:43.469361] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.489 [2024-05-15 19:32:43.469380] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.489 [2024-05-15 19:32:43.478747] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.489 [2024-05-15 19:32:43.478766] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.489 [2024-05-15 19:32:43.487680] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.489 [2024-05-15 19:32:43.487698] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.489 [2024-05-15 19:32:43.497628] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.489 [2024-05-15 19:32:43.497646] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.489 [2024-05-15 19:32:43.510028] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.489 [2024-05-15 19:32:43.510046] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.489 [2024-05-15 19:32:43.518149] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.489 [2024-05-15 19:32:43.518166] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.489 [2024-05-15 19:32:43.529906] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.489 [2024-05-15 19:32:43.529924] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.489 [2024-05-15 19:32:43.538505] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.489 [2024-05-15 19:32:43.538523] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.489 [2024-05-15 19:32:43.548221] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.489 [2024-05-15 19:32:43.548239] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.489 [2024-05-15 19:32:43.557641] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.489 [2024-05-15 19:32:43.557659] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.489 [2024-05-15 19:32:43.567125] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.489 [2024-05-15 19:32:43.567143] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.489 [2024-05-15 19:32:43.576698] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.489 [2024-05-15 19:32:43.576716] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.489 [2024-05-15 19:32:43.586291] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.489 [2024-05-15 19:32:43.586308] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.489 [2024-05-15 19:32:43.595872] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.489 [2024-05-15 19:32:43.595890] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.489 [2024-05-15 19:32:43.605600] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.489 [2024-05-15 19:32:43.605618] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.489 [2024-05-15 19:32:43.615188] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.489 [2024-05-15 19:32:43.615206] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.489 [2024-05-15 19:32:43.624757] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.489 [2024-05-15 19:32:43.624776] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.489 [2024-05-15 19:32:43.634403] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.489 [2024-05-15 19:32:43.634425] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.489 [2024-05-15 19:32:43.644015] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.489 [2024-05-15 19:32:43.644033] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.489 [2024-05-15 19:32:43.653002] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.489 [2024-05-15 19:32:43.653021] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.489 [2024-05-15 19:32:43.662620] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.489 [2024-05-15 19:32:43.662638] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.489 [2024-05-15 19:32:43.672190] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.489 [2024-05-15 19:32:43.672208] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.750 [2024-05-15 19:32:43.681630] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.750 [2024-05-15 19:32:43.681649] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.750 [2024-05-15 19:32:43.691074] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.750 [2024-05-15 19:32:43.691092] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.750 [2024-05-15 19:32:43.700587] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.750 [2024-05-15 19:32:43.700605] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.750 [2024-05-15 19:32:43.709953] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.750 [2024-05-15 19:32:43.709971] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.750 [2024-05-15 19:32:43.719407] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.750 [2024-05-15 19:32:43.719425] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.750 [2024-05-15 19:32:43.729038] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.750 [2024-05-15 19:32:43.729056] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.750 [2024-05-15 19:32:43.738825] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.750 [2024-05-15 19:32:43.738843] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.750 [2024-05-15 19:32:43.748357] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.750 [2024-05-15 19:32:43.748375] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.750 [2024-05-15 19:32:43.757577] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.750 [2024-05-15 19:32:43.757596] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.750 [2024-05-15 19:32:43.766995] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.750 [2024-05-15 19:32:43.767013] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.750 [2024-05-15 19:32:43.776238] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.750 [2024-05-15 19:32:43.776256] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.750 [2024-05-15 19:32:43.785642] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.750 [2024-05-15 19:32:43.785659] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.750 [2024-05-15 19:32:43.795024] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.750 [2024-05-15 19:32:43.795042] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.750 [2024-05-15 19:32:43.804536] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.750 [2024-05-15 19:32:43.804553] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.750 [2024-05-15 19:32:43.813638] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.750 [2024-05-15 19:32:43.813656] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.750 [2024-05-15 19:32:43.823552] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.750 [2024-05-15 19:32:43.823570] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.750 [2024-05-15 19:32:43.835157] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.750 [2024-05-15 19:32:43.835175] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.750 [2024-05-15 19:32:43.845448] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.750 [2024-05-15 19:32:43.845467] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.750 [2024-05-15 19:32:43.855705] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.750 [2024-05-15 19:32:43.855723] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.750 [2024-05-15 19:32:43.863906] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.750 [2024-05-15 19:32:43.863924] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.750 [2024-05-15 19:32:43.875878] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.750 [2024-05-15 19:32:43.875896] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.750 [2024-05-15 19:32:43.886794] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.750 [2024-05-15 19:32:43.886813] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.750 [2024-05-15 19:32:43.895272] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.750 [2024-05-15 19:32:43.895290] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.750 [2024-05-15 19:32:43.905055] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.750 [2024-05-15 19:32:43.905073] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.750 [2024-05-15 19:32:43.914317] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.750 [2024-05-15 19:32:43.914337] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.750 [2024-05-15 19:32:43.923800] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.750 [2024-05-15 19:32:43.923819] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.750 [2024-05-15 19:32:43.933180] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.750 [2024-05-15 19:32:43.933198] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:18.011 [2024-05-15 19:32:43.942623] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:18.011 [2024-05-15 19:32:43.942641] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:18.011 [2024-05-15 19:32:43.951887] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:18.011 [2024-05-15 19:32:43.951905] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:18.011 [2024-05-15 19:32:43.961419] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:18.011 [2024-05-15 19:32:43.961436] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:18.011 [2024-05-15 19:32:43.970714] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:18.011 [2024-05-15 19:32:43.970731] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:18.011 [2024-05-15 19:32:43.980215] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:18.011 [2024-05-15 19:32:43.980232] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:18.011 [2024-05-15 19:32:43.989320] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:18.011 [2024-05-15 19:32:43.989338] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:18.011 [2024-05-15 19:32:43.999216] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:18.011 [2024-05-15 19:32:43.999234] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:18.011 [2024-05-15 19:32:44.008439] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:18.011 [2024-05-15 19:32:44.008457] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:18.011 [2024-05-15 19:32:44.019815] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:18.011 [2024-05-15 19:32:44.019834] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:18.011 [2024-05-15 19:32:44.028357] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:18.011 [2024-05-15 19:32:44.028375] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:18.011 [2024-05-15 19:32:44.038068] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:18.011 [2024-05-15 19:32:44.038086] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:18.011 [2024-05-15 19:32:44.047630] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:18.011 [2024-05-15 19:32:44.047649] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:18.011 [2024-05-15 19:32:44.057163] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:18.011 [2024-05-15 19:32:44.057181] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:18.011 [2024-05-15 19:32:44.066533] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:18.011 [2024-05-15 19:32:44.066551] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:18.011 [2024-05-15 19:32:44.075933] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:18.011 [2024-05-15 19:32:44.075950] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:18.011 [2024-05-15 19:32:44.085349] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:18.011 [2024-05-15 19:32:44.085366] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:18.011 [2024-05-15 19:32:44.094910] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:18.011 [2024-05-15 19:32:44.094928] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:18.011 [2024-05-15 19:32:44.104208] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:18.011 [2024-05-15 19:32:44.104225] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-record pair (subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext rejecting NSID 1 as already in use, immediately followed by nvmf_rpc.c:1536:nvmf_rpc_ns_paused reporting "Unable to add namespace") repeats roughly every 10 ms, a few hundred more times, through 2024-05-15 19:32:47.012, elapsed time 00:17:18.011 to 00:17:20.885 ...]
00:17:20.885 [2024-05-15 19:32:47.022174] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:20.885 [2024-05-15 19:32:47.022192] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:20.885 [2024-05-15 19:32:47.032023]
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.885 [2024-05-15 19:32:47.032041] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.885 [2024-05-15 19:32:47.041400] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.885 [2024-05-15 19:32:47.041419] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.885 [2024-05-15 19:32:47.052709] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.885 [2024-05-15 19:32:47.052727] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.885 [2024-05-15 19:32:47.061039] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.885 [2024-05-15 19:32:47.061057] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.146 [2024-05-15 19:32:47.070889] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.146 [2024-05-15 19:32:47.070907] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.146 [2024-05-15 19:32:47.080379] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.146 [2024-05-15 19:32:47.080397] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.146 [2024-05-15 19:32:47.089715] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.146 [2024-05-15 19:32:47.089734] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.146 [2024-05-15 19:32:47.099098] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.146 [2024-05-15 19:32:47.099117] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.146 [2024-05-15 19:32:47.108727] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.146 [2024-05-15 19:32:47.108746] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.146 [2024-05-15 19:32:47.118066] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.146 [2024-05-15 19:32:47.118084] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.146 [2024-05-15 19:32:47.127500] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.146 [2024-05-15 19:32:47.127517] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.146 [2024-05-15 19:32:47.137090] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.146 [2024-05-15 19:32:47.137107] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.146 [2024-05-15 19:32:47.146360] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.146 [2024-05-15 19:32:47.146378] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.146 [2024-05-15 19:32:47.155804] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.146 [2024-05-15 19:32:47.155822] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.146 [2024-05-15 19:32:47.165170] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.146 [2024-05-15 19:32:47.165188] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.146 [2024-05-15 19:32:47.174700] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.146 [2024-05-15 19:32:47.174718] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.146 [2024-05-15 19:32:47.184243] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.146 [2024-05-15 19:32:47.184261] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.146 [2024-05-15 19:32:47.193527] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.146 [2024-05-15 19:32:47.193546] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.146 [2024-05-15 19:32:47.203265] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.146 [2024-05-15 19:32:47.203283] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.146 [2024-05-15 19:32:47.212857] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.146 [2024-05-15 19:32:47.212875] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.146 [2024-05-15 19:32:47.224061] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.146 [2024-05-15 19:32:47.224080] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.146 [2024-05-15 19:32:47.232560] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.146 [2024-05-15 19:32:47.232578] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.146 [2024-05-15 19:32:47.242492] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.146 [2024-05-15 19:32:47.242510] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.146 [2024-05-15 19:32:47.251473] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.146 [2024-05-15 19:32:47.251491] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.146 [2024-05-15 19:32:47.261055] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.146 [2024-05-15 19:32:47.261073] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.146 [2024-05-15 19:32:47.270204] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.146 [2024-05-15 19:32:47.270222] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.146 [2024-05-15 19:32:47.279752] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.146 [2024-05-15 19:32:47.279769] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.146 [2024-05-15 19:32:47.289139] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.146 [2024-05-15 19:32:47.289156] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.146 [2024-05-15 19:32:47.300437] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.146 [2024-05-15 19:32:47.300454] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.146 [2024-05-15 19:32:47.308901] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.146 [2024-05-15 19:32:47.308919] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.146 [2024-05-15 19:32:47.318906] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.146 [2024-05-15 19:32:47.318924] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.146 [2024-05-15 19:32:47.328266] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.146 [2024-05-15 19:32:47.328285] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.406 [2024-05-15 19:32:47.337172] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.406 [2024-05-15 19:32:47.337190] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.406 [2024-05-15 19:32:47.347139] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.406 [2024-05-15 19:32:47.347156] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.406 [2024-05-15 19:32:47.358684] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.406 [2024-05-15 19:32:47.358702] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.406 [2024-05-15 19:32:47.367186] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.406 [2024-05-15 19:32:47.367204] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.406 [2024-05-15 19:32:47.377239] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.406 [2024-05-15 19:32:47.377257] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.406 [2024-05-15 19:32:47.386562] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.406 [2024-05-15 19:32:47.386580] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.406 [2024-05-15 19:32:47.396285] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.406 [2024-05-15 19:32:47.396303] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.406 [2024-05-15 19:32:47.405608] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.406 [2024-05-15 19:32:47.405626] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.407 [2024-05-15 19:32:47.415170] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.407 [2024-05-15 19:32:47.415187] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.407 [2024-05-15 19:32:47.424758] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.407 [2024-05-15 19:32:47.424776] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.407 [2024-05-15 19:32:47.434203] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.407 [2024-05-15 19:32:47.434221] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.407 [2024-05-15 19:32:47.443876] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.407 [2024-05-15 19:32:47.443897] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.407 [2024-05-15 19:32:47.453344] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.407 [2024-05-15 19:32:47.453362] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.407 [2024-05-15 19:32:47.462351] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.407 [2024-05-15 19:32:47.462369] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.407 [2024-05-15 19:32:47.472114] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.407 [2024-05-15 19:32:47.472132] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.407 [2024-05-15 19:32:47.481605] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.407 [2024-05-15 19:32:47.481623] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.407 [2024-05-15 19:32:47.492825] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.407 [2024-05-15 19:32:47.492843] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.407 [2024-05-15 19:32:47.501227] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.407 [2024-05-15 19:32:47.501245] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.407 [2024-05-15 19:32:47.511229] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.407 [2024-05-15 19:32:47.511247] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.407 [2024-05-15 19:32:47.520448] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.407 [2024-05-15 19:32:47.520466] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.407 [2024-05-15 19:32:47.529745] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.407 [2024-05-15 19:32:47.529763] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.407 [2024-05-15 19:32:47.539169] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.407 [2024-05-15 19:32:47.539187] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.407 [2024-05-15 19:32:47.548976] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.407 [2024-05-15 19:32:47.548994] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.407 [2024-05-15 19:32:47.558459] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.407 [2024-05-15 19:32:47.558477] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.407 [2024-05-15 19:32:47.568024] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.407 [2024-05-15 19:32:47.568041] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.407 [2024-05-15 19:32:47.577541] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.407 [2024-05-15 19:32:47.577559] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.407 [2024-05-15 19:32:47.586904] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.407 [2024-05-15 19:32:47.586922] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.667 [2024-05-15 19:32:47.596349] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.667 [2024-05-15 19:32:47.596367] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.667 [2024-05-15 19:32:47.605746] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.667 [2024-05-15 19:32:47.605764] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.667 [2024-05-15 19:32:47.615521] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.667 [2024-05-15 19:32:47.615540] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.667 [2024-05-15 19:32:47.624953] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.667 [2024-05-15 19:32:47.624975] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.667 [2024-05-15 19:32:47.634083] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.667 [2024-05-15 19:32:47.634100] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.668 [2024-05-15 19:32:47.643972] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.668 [2024-05-15 19:32:47.643989] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.668 [2024-05-15 19:32:47.656369] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.668 [2024-05-15 19:32:47.656396] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.668 [2024-05-15 19:32:47.665119] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.668 [2024-05-15 19:32:47.665137] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.668 [2024-05-15 19:32:47.674692] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.668 [2024-05-15 19:32:47.674710] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.668 [2024-05-15 19:32:47.683591] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.668 [2024-05-15 19:32:47.683609] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.668 [2024-05-15 19:32:47.693827] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.668 [2024-05-15 19:32:47.693845] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.668 [2024-05-15 19:32:47.703282] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.668 [2024-05-15 19:32:47.703300] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.668 [2024-05-15 19:32:47.712778] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.668 [2024-05-15 19:32:47.712795] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.668 [2024-05-15 19:32:47.722202] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.668 [2024-05-15 19:32:47.722220] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.668 [2024-05-15 19:32:47.731524] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.668 [2024-05-15 19:32:47.731542] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.668 [2024-05-15 19:32:47.740979] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.668 [2024-05-15 19:32:47.740997] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.668 [2024-05-15 19:32:47.750411] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.668 [2024-05-15 19:32:47.750428] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.668 [2024-05-15 19:32:47.759730] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.668 [2024-05-15 19:32:47.759748] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.668 [2024-05-15 19:32:47.769115] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.668 [2024-05-15 19:32:47.769133] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.668 [2024-05-15 19:32:47.778275] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.668 [2024-05-15 19:32:47.778293] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.668 [2024-05-15 19:32:47.788075] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.668 [2024-05-15 19:32:47.788093] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.668 [2024-05-15 19:32:47.797378] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.668 [2024-05-15 19:32:47.797396] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.668 [2024-05-15 19:32:47.808441] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.668 [2024-05-15 19:32:47.808464] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.668 [2024-05-15 19:32:47.816883] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.668 [2024-05-15 19:32:47.816900] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.668 [2024-05-15 19:32:47.826674] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.668 [2024-05-15 19:32:47.826692] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.668 [2024-05-15 19:32:47.836374] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.668 [2024-05-15 19:32:47.836392] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.668 [2024-05-15 19:32:47.845922] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.668 [2024-05-15 19:32:47.845940] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 [2024-05-15 19:32:47.854168] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-05-15 19:32:47.854185] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 00:17:21.929 Latency(us) 00:17:21.929 Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:17:21.929 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:17:21.929 Nvme1n1 : 5.01 13458.21 105.14 0.00 0.00 9500.19 4287.15 19333.12 00:17:21.929 =================================================================================================================== 00:17:21.929 Total : 13458.21 105.14 0.00 0.00 9500.19 4287.15 19333.12 00:17:21.929 [2024-05-15 19:32:47.860198] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-05-15 19:32:47.860214] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 [2024-05-15 19:32:47.868219] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-05-15 19:32:47.868234] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 [2024-05-15 19:32:47.876240] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-05-15 19:32:47.876252] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 [2024-05-15 19:32:47.884261] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-05-15 19:32:47.884274] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 [2024-05-15 19:32:47.892282] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-05-15 19:32:47.892294] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 [2024-05-15 19:32:47.900303] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-05-15 19:32:47.900320] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 [2024-05-15 19:32:47.908326] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-05-15 19:32:47.908336] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 [2024-05-15 19:32:47.916347] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-05-15 19:32:47.916358] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 [2024-05-15 19:32:47.924368] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-05-15 19:32:47.924380] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 [2024-05-15 19:32:47.932386] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-05-15 19:32:47.932397] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 [2024-05-15 19:32:47.940408] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-05-15 19:32:47.940424] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 [2024-05-15 19:32:47.948430] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-05-15 19:32:47.948442] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 [2024-05-15 19:32:47.956450] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-05-15 19:32:47.956462] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:17:21.929 [2024-05-15 19:32:47.964470] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-05-15 19:32:47.964482] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 [2024-05-15 19:32:47.972492] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-05-15 19:32:47.972503] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 [2024-05-15 19:32:47.980511] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-05-15 19:32:47.980521] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 [2024-05-15 19:32:47.988532] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-05-15 19:32:47.988542] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3565025) - No such process 00:17:21.929 19:32:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3565025 00:17:21.929 19:32:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:21.929 19:32:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.929 19:32:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:21.929 19:32:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.929 19:32:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:17:21.929 19:32:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.929 19:32:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:21.929 delay0 00:17:21.929 19:32:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.929 19:32:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:17:21.929 19:32:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.929 19:32:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:21.929 19:32:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.929 19:32:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:17:21.929 EAL: No free 2048 kB hugepages reported on node 1 00:17:22.190 [2024-05-15 19:32:48.172515] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:17:28.858 Initializing NVMe Controllers 00:17:28.858 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:28.858 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:28.858 Initialization complete. Launching workers. 
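The latency summary printed above belongs to the 5-second randrw verification job and is internally consistent: 13458.21 IOPS at an 8192-byte I/O size is 13458.21 x 8192 / 2^20, which is about 105.1 MiB/s and matches the MiB/s column, and at queue depth 128 an average latency of 9500.19 us implies roughly 128 / 0.0095 s, or about 13.5k IOPS. The RPC calls in the trace above then swap NSID 1 for a delay bdev and drive it with the abort example whose statistics follow below. A minimal standalone sketch of that sequence, assuming SPDK's scripts/rpc.py as the out-of-tree equivalent of the rpc_cmd wrapper and the same target address as in this run, would be:

# Hypothetical standalone replay of the zcopy abort stage shown in this log.
# Assumes the nvmf target from earlier in the run is still listening on
# 10.0.0.2:4420 and that malloc0 already backs nqn.2016-06.io.spdk:cnode1.
RPC=./scripts/rpc.py

# Swap NSID 1 for a delay bdev so inflight I/O sits long enough to be aborted
# (all four delay parameters are 1,000,000 microseconds, as in the trace).
$RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
$RPC bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

# Run the abort example for 5 seconds at queue depth 64, 50/50 randrw,
# against namespace 1 of the TCP subsystem.
./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'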
00:17:28.858 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 92 00:17:28.858 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 376, failed to submit 36 00:17:28.858 success 158, unsuccess 218, failed 0 00:17:28.858 19:32:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:17:28.858 19:32:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:17:28.858 19:32:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:28.858 19:32:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:17:28.858 19:32:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:28.858 19:32:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:17:28.858 19:32:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:28.858 19:32:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:28.858 rmmod nvme_tcp 00:17:28.858 rmmod nvme_fabrics 00:17:28.858 rmmod nvme_keyring 00:17:28.858 19:32:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:28.858 19:32:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:17:28.858 19:32:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:17:28.858 19:32:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3562687 ']' 00:17:28.858 19:32:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3562687 00:17:28.858 19:32:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 3562687 ']' 00:17:28.858 19:32:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 3562687 00:17:28.858 19:32:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:17:28.858 19:32:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:28.858 19:32:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3562687 00:17:28.858 19:32:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:28.858 19:32:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:28.858 19:32:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3562687' 00:17:28.858 killing process with pid 3562687 00:17:28.858 19:32:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 3562687 00:17:28.858 [2024-05-15 19:32:54.443629] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:28.858 19:32:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 3562687 00:17:28.858 19:32:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:28.858 19:32:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:28.858 19:32:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:28.858 19:32:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:28.858 19:32:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:28.858 19:32:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.858 19:32:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:28.858 19:32:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.774 
19:32:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:30.774 00:17:30.774 real 0m34.451s 00:17:30.774 user 0m45.083s 00:17:30.774 sys 0m10.663s 00:17:30.774 19:32:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:30.774 19:32:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:30.774 ************************************ 00:17:30.774 END TEST nvmf_zcopy 00:17:30.774 ************************************ 00:17:30.774 19:32:56 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:17:30.774 19:32:56 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:30.774 19:32:56 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:30.774 19:32:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:30.774 ************************************ 00:17:30.774 START TEST nvmf_nmic 00:17:30.774 ************************************ 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:17:30.774 * Looking for test storage... 00:17:30.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # 
nvmftestinit 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:17:30.774 19:32:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:38.923 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:38.923 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:38.923 Found net devices under 0000:31:00.0: cvl_0_0 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:38.923 19:33:04 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:38.923 Found net devices under 0000:31:00.1: cvl_0_1 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:38.923 19:33:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:38.923 19:33:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:38.923 19:33:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:39.184 19:33:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:39.184 19:33:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:39.185 19:33:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:39.185 19:33:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:39.185 19:33:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:39.185 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:39.185 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:17:39.185 00:17:39.185 --- 10.0.0.2 ping statistics --- 00:17:39.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.185 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:17:39.185 19:33:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:39.185 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:39.185 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.462 ms 00:17:39.185 00:17:39.185 --- 10.0.0.1 ping statistics --- 00:17:39.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.185 rtt min/avg/max/mdev = 0.462/0.462/0.462/0.000 ms 00:17:39.185 19:33:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:39.185 19:33:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:17:39.185 19:33:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:39.185 19:33:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:39.185 19:33:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:39.185 19:33:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:39.185 19:33:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:39.185 19:33:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:39.185 19:33:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:39.185 19:33:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:17:39.185 19:33:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:39.185 19:33:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:39.185 19:33:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:39.185 19:33:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3572170 00:17:39.185 19:33:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3572170 00:17:39.185 19:33:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:39.185 19:33:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 3572170 ']' 00:17:39.185 19:33:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.185 19:33:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:39.185 19:33:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:39.185 19:33:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:39.185 19:33:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:39.446 [2024-05-15 19:33:05.373230] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
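Between the PCI scan and the ping checks above, nvmftestinit splits the two E810 ports into a target side and an initiator side: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, cvl_0_1 keeps 10.0.0.1/24 in the root namespace, and nvmf_tgt is then started inside that namespace. A condensed sketch of the same bring-up, assuming root privileges and the interface names reported above (paths shortened, the waitforlisten step omitted):

NS=cvl_0_0_ns_spdk

# Start from clean addresses on both E810 ports, then move the target port
# into its own network namespace.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add $NS
ip link set cvl_0_0 netns $NS

# 10.0.0.1 = initiator (root namespace), 10.0.0.2 = target (inside $NS).
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec $NS ip link set cvl_0_0 up
ip netns exec $NS ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Reachability both ways before the target comes up.
ping -c 1 10.0.0.2
ip netns exec $NS ping -c 1 10.0.0.1

# nvmf_tgt runs inside the namespace: 4 reactors (mask 0xF), all tracepoint
# groups enabled (-e 0xFFFF), shared-memory id 0, as nvmfappstart does here.
ip netns exec $NS ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &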
00:17:39.446 [2024-05-15 19:33:05.373342] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:39.446 EAL: No free 2048 kB hugepages reported on node 1 00:17:39.446 [2024-05-15 19:33:05.469495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:39.446 [2024-05-15 19:33:05.568931] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:39.446 [2024-05-15 19:33:05.568985] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:39.446 [2024-05-15 19:33:05.568994] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:39.446 [2024-05-15 19:33:05.569001] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:39.446 [2024-05-15 19:33:05.569007] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:39.446 [2024-05-15 19:33:05.569135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:39.446 [2024-05-15 19:33:05.569284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:39.446 [2024-05-15 19:33:05.569432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:39.446 [2024-05-15 19:33:05.569435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.387 19:33:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:40.387 19:33:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:17:40.387 19:33:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:40.387 19:33:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:40.387 19:33:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:40.387 19:33:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:40.387 19:33:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:40.387 19:33:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.387 19:33:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:40.387 [2024-05-15 19:33:06.296078] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:40.387 19:33:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.387 19:33:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:40.387 19:33:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.387 19:33:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:40.387 Malloc0 00:17:40.387 19:33:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.387 19:33:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:40.387 19:33:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.387 19:33:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:40.387 19:33:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.387 19:33:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:40.387 19:33:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.387 19:33:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:40.387 19:33:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.387 19:33:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:40.387 19:33:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.387 19:33:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:40.387 [2024-05-15 19:33:06.355267] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:40.387 [2024-05-15 19:33:06.355487] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:40.387 19:33:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.387 19:33:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:17:40.387 test case1: single bdev can't be used in multiple subsystems 00:17:40.387 19:33:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:17:40.387 19:33:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.387 19:33:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:40.387 19:33:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.387 19:33:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:40.387 19:33:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.387 19:33:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:40.387 19:33:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.387 19:33:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:17:40.387 19:33:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:17:40.387 19:33:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.387 19:33:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:40.387 [2024-05-15 19:33:06.391417] bdev.c:8030:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:17:40.388 [2024-05-15 19:33:06.391434] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:17:40.388 [2024-05-15 19:33:06.391441] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.388 request: 00:17:40.388 { 00:17:40.388 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:17:40.388 "namespace": { 00:17:40.388 "bdev_name": "Malloc0", 00:17:40.388 "no_auto_visible": false 00:17:40.388 }, 00:17:40.388 "method": "nvmf_subsystem_add_ns", 00:17:40.388 "req_id": 1 00:17:40.388 } 00:17:40.388 Got JSON-RPC error response 00:17:40.388 response: 00:17:40.388 { 00:17:40.388 "code": -32602, 00:17:40.388 "message": "Invalid parameters" 00:17:40.388 } 00:17:40.388 19:33:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:40.388 19:33:06 
nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:17:40.388 19:33:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:17:40.388 19:33:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:17:40.388 Adding namespace failed - expected result. 00:17:40.388 19:33:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:17:40.388 test case2: host connect to nvmf target in multiple paths 00:17:40.388 19:33:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:40.388 19:33:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.388 19:33:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:40.388 [2024-05-15 19:33:06.403557] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:40.388 19:33:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.388 19:33:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:41.773 19:33:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:17:43.686 19:33:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:17:43.686 19:33:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:17:43.686 19:33:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:17:43.686 19:33:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:17:43.686 19:33:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:17:45.626 19:33:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:17:45.626 19:33:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:17:45.626 19:33:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:17:45.626 19:33:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:17:45.626 19:33:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:17:45.626 19:33:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:17:45.626 19:33:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:45.626 [global] 00:17:45.626 thread=1 00:17:45.626 invalidate=1 00:17:45.626 rw=write 00:17:45.626 time_based=1 00:17:45.626 runtime=1 00:17:45.626 ioengine=libaio 00:17:45.626 direct=1 00:17:45.626 bs=4096 00:17:45.626 iodepth=1 00:17:45.626 norandommap=0 00:17:45.626 numjobs=1 00:17:45.626 00:17:45.626 verify_dump=1 00:17:45.626 verify_backlog=512 00:17:45.626 verify_state_save=0 00:17:45.626 do_verify=1 00:17:45.626 verify=crc32c-intel 00:17:45.626 [job0] 00:17:45.627 filename=/dev/nvme0n1 00:17:45.627 Could not set queue depth (nvme0n1) 00:17:45.893 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:17:45.893 fio-3.35 00:17:45.893 Starting 1 thread 00:17:47.279 00:17:47.279 job0: (groupid=0, jobs=1): err= 0: pid=3574173: Wed May 15 19:33:13 2024 00:17:47.279 read: IOPS=16, BW=66.9KiB/s (68.5kB/s)(68.0KiB/1017msec) 00:17:47.279 slat (nsec): min=25931, max=27725, avg=26524.00, stdev=533.47 00:17:47.279 clat (usec): min=1188, max=42154, avg=39569.06, stdev=9890.90 00:17:47.279 lat (usec): min=1213, max=42180, avg=39595.58, stdev=9891.04 00:17:47.279 clat percentiles (usec): 00:17:47.279 | 1.00th=[ 1188], 5.00th=[ 1188], 10.00th=[41681], 20.00th=[41681], 00:17:47.279 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:47.279 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:47.279 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:47.279 | 99.99th=[42206] 00:17:47.279 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:17:47.279 slat (usec): min=9, max=29064, avg=85.77, stdev=1283.24 00:17:47.279 clat (usec): min=234, max=804, avg=579.60, stdev=101.07 00:17:47.279 lat (usec): min=243, max=29804, avg=665.37, stdev=1294.61 00:17:47.279 clat percentiles (usec): 00:17:47.279 | 1.00th=[ 351], 5.00th=[ 392], 10.00th=[ 433], 20.00th=[ 490], 00:17:47.279 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 586], 60.00th=[ 603], 00:17:47.279 | 70.00th=[ 635], 80.00th=[ 676], 90.00th=[ 709], 95.00th=[ 734], 00:17:47.279 | 99.00th=[ 758], 99.50th=[ 775], 99.90th=[ 807], 99.95th=[ 807], 00:17:47.279 | 99.99th=[ 807] 00:17:47.279 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:17:47.279 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:47.279 lat (usec) : 250=0.19%, 500=22.50%, 750=71.46%, 1000=2.65% 00:17:47.279 lat (msec) : 2=0.19%, 50=3.02% 00:17:47.279 cpu : usr=0.89%, sys=1.87%, ctx=532, majf=0, minf=1 00:17:47.279 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:47.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:47.279 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:47.279 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:47.279 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:47.279 00:17:47.279 Run status group 0 (all jobs): 00:17:47.279 READ: bw=66.9KiB/s (68.5kB/s), 66.9KiB/s-66.9KiB/s (68.5kB/s-68.5kB/s), io=68.0KiB (69.6kB), run=1017-1017msec 00:17:47.279 WRITE: bw=2014KiB/s (2062kB/s), 2014KiB/s-2014KiB/s (2062kB/s-2062kB/s), io=2048KiB (2097kB), run=1017-1017msec 00:17:47.279 00:17:47.279 Disk stats (read/write): 00:17:47.279 nvme0n1: ios=66/512, merge=0/0, ticks=1444/234, in_queue=1678, util=98.90% 00:17:47.279 19:33:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:47.279 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:17:47.279 19:33:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:47.279 19:33:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:17:47.279 19:33:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:17:47.279 19:33:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:47.279 19:33:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:17:47.279 19:33:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 
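The waitforserial / waitforserial_disconnect helpers traced around this fio run simply poll lsblk for the subsystem serial number (SPDKISFASTANDAWESOME). The shape of that polling loop, reconstructed from the trace as a rough sketch rather than the exact autotest_common.sh source:

  waitforserial() {
      local serial=$1 nvme_device_counter=${2:-1} nvme_devices=0 i=0
      while (( i++ <= 15 )); do
          sleep 2                                                     # give the connect time to settle
          nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial") # count namespaces with this serial
          (( nvme_devices == nvme_device_counter )) && return 0
      done
      return 1
  }

waitforserial_disconnect does the inverse: it keeps re-running 'lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME' until the serial no longer appears, then returns 0.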
00:17:47.279 19:33:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:17:47.279 19:33:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:17:47.279 19:33:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:17:47.279 19:33:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:47.279 19:33:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:17:47.279 19:33:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:47.279 19:33:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:17:47.279 19:33:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:47.279 19:33:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:47.279 rmmod nvme_tcp 00:17:47.279 rmmod nvme_fabrics 00:17:47.279 rmmod nvme_keyring 00:17:47.279 19:33:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:47.279 19:33:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:17:47.279 19:33:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:17:47.279 19:33:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3572170 ']' 00:17:47.279 19:33:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3572170 00:17:47.279 19:33:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 3572170 ']' 00:17:47.279 19:33:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 3572170 00:17:47.279 19:33:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:17:47.279 19:33:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:47.279 19:33:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3572170 00:17:47.279 19:33:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:47.279 19:33:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:47.279 19:33:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3572170' 00:17:47.279 killing process with pid 3572170 00:17:47.279 19:33:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 3572170 00:17:47.279 [2024-05-15 19:33:13.354507] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:47.279 19:33:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 3572170 00:17:47.540 19:33:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:47.540 19:33:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:47.540 19:33:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:47.540 19:33:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:47.540 19:33:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:47.540 19:33:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:47.540 19:33:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:47.540 19:33:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:49.452 19:33:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:49.452 00:17:49.452 real 0m18.843s 00:17:49.452 user 0m49.570s 00:17:49.452 sys 0m7.222s 00:17:49.452 19:33:15 nvmf_tcp.nvmf_nmic -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:17:49.452 19:33:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:49.452 ************************************ 00:17:49.452 END TEST nvmf_nmic 00:17:49.452 ************************************ 00:17:49.452 19:33:15 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:49.452 19:33:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:49.452 19:33:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:49.452 19:33:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:49.713 ************************************ 00:17:49.713 START TEST nvmf_fio_target 00:17:49.713 ************************************ 00:17:49.713 19:33:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:49.713 * Looking for test storage... 00:17:49.713 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:49.713 19:33:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:49.713 19:33:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:17:49.713 19:33:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:49.713 19:33:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:49.713 19:33:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:49.713 19:33:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:49.713 19:33:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:49.713 19:33:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:49.713 19:33:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:49.713 19:33:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:49.713 19:33:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:49.713 19:33:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:49.713 19:33:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:49.714 19:33:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:49.714 19:33:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:49.714 19:33:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:49.714 19:33:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:49.714 19:33:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:49.714 19:33:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:49.714 19:33:15 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:49.714 19:33:15 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:49.714 19:33:15 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
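The nvmf/common.sh defaults being set up here (ports 4420/4421/4422, serial SPDKISFASTANDAWESOME, and the host NQN/ID produced by 'nvme gen-hostnqn') are what the later connect calls in this test are assembled from. For illustration, the connect issued further down at target/fio.sh@46 has this shape, shown with the values from this run:

  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
               --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 \
               -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

and the matching teardown at the end of a test is 'nvme disconnect -n nqn.2016-06.io.spdk:cnode1'.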
00:17:49.714 19:33:15 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.714 19:33:15 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.714 19:33:15 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.714 19:33:15 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:17:49.714 19:33:15 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.714 19:33:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:17:49.714 19:33:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:49.714 19:33:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:49.714 19:33:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:49.714 19:33:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:49.714 19:33:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:49.714 19:33:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:49.714 19:33:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:49.714 19:33:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:49.714 19:33:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:49.714 19:33:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:49.714 19:33:15 nvmf_tcp.nvmf_fio_target 
-- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:49.714 19:33:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:17:49.714 19:33:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:49.714 19:33:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:49.714 19:33:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:49.714 19:33:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:49.714 19:33:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:49.714 19:33:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:49.714 19:33:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:49.714 19:33:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:49.714 19:33:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:49.714 19:33:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:49.714 19:33:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:49.714 19:33:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.853 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:57.854 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:57.854 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:57.854 
19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:57.854 Found net devices under 0000:31:00.0: cvl_0_0 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:57.854 Found net devices under 0000:31:00.1: cvl_0_1 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:57.854 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:57.854 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.538 ms 00:17:57.854 00:17:57.854 --- 10.0.0.2 ping statistics --- 00:17:57.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.854 rtt min/avg/max/mdev = 0.538/0.538/0.538/0.000 ms 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:57.854 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:57.854 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:17:57.854 00:17:57.854 --- 10.0.0.1 ping statistics --- 00:17:57.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.854 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3579155 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3579155 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 3579155 ']' 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:57.854 19:33:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
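Once nvmf_tgt is up and listening on /var/tmp/spdk.sock, target/fio.sh drives the bdev and subsystem setup over JSON-RPC. The sequence traced below, condensed with the full rpc.py path shortened to rpc.py:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512              # Malloc0 and Malloc1: plain namespaces
  rpc.py bdev_malloc_create 64 512
  rpc.py bdev_malloc_create 64 512              # Malloc2 and Malloc3: members of raid0
  rpc.py bdev_malloc_create 64 512
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  rpc.py bdev_malloc_create 64 512              # Malloc4, Malloc5, Malloc6: members of concat0
  rpc.py bdev_malloc_create 64 512
  rpc.py bdev_malloc_create 64 512
  rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0

After that the initiator connects to cnode1, waits for all 4 namespaces (waitforserial SPDKISFASTANDAWESOME 4), and the fio wrapper runs one job per namespace, nvme0n1 through nvme0n4.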
00:17:57.855 19:33:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:57.855 19:33:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.855 [2024-05-15 19:33:23.782225] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:17:57.855 [2024-05-15 19:33:23.782284] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:57.855 EAL: No free 2048 kB hugepages reported on node 1 00:17:57.855 [2024-05-15 19:33:23.878572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:57.855 [2024-05-15 19:33:23.973822] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:57.855 [2024-05-15 19:33:23.973885] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:57.855 [2024-05-15 19:33:23.973893] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:57.855 [2024-05-15 19:33:23.973900] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:57.855 [2024-05-15 19:33:23.973907] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:57.855 [2024-05-15 19:33:23.974047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.855 [2024-05-15 19:33:23.974180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:57.855 [2024-05-15 19:33:23.974363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:57.855 [2024-05-15 19:33:23.974364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.797 19:33:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:58.797 19:33:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:17:58.797 19:33:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:58.797 19:33:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:58.797 19:33:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.797 19:33:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:58.797 19:33:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:58.797 [2024-05-15 19:33:24.899750] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:58.797 19:33:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:59.057 19:33:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:17:59.057 19:33:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:59.317 19:33:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:17:59.317 19:33:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:59.578 19:33:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:17:59.578 19:33:25 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:59.839 19:33:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:17:59.839 19:33:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:00.099 19:33:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:00.358 19:33:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:00.358 19:33:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:00.358 19:33:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:00.358 19:33:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:00.618 19:33:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:00.618 19:33:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:18:00.878 19:33:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:01.138 19:33:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:01.138 19:33:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:01.138 19:33:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:01.138 19:33:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:01.399 19:33:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:01.660 [2024-05-15 19:33:27.706296] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:01.660 [2024-05-15 19:33:27.706564] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:01.660 19:33:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:18:01.920 19:33:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:18:02.180 19:33:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:03.564 19:33:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 
-- # waitforserial SPDKISFASTANDAWESOME 4 00:18:03.564 19:33:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:18:03.564 19:33:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:03.564 19:33:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:18:03.564 19:33:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:18:03.564 19:33:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:18:06.108 19:33:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:06.108 19:33:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:06.108 19:33:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:18:06.108 19:33:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:18:06.108 19:33:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:06.108 19:33:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:18:06.108 19:33:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:06.108 [global] 00:18:06.108 thread=1 00:18:06.108 invalidate=1 00:18:06.108 rw=write 00:18:06.108 time_based=1 00:18:06.108 runtime=1 00:18:06.108 ioengine=libaio 00:18:06.108 direct=1 00:18:06.108 bs=4096 00:18:06.108 iodepth=1 00:18:06.108 norandommap=0 00:18:06.108 numjobs=1 00:18:06.108 00:18:06.108 verify_dump=1 00:18:06.108 verify_backlog=512 00:18:06.108 verify_state_save=0 00:18:06.108 do_verify=1 00:18:06.108 verify=crc32c-intel 00:18:06.108 [job0] 00:18:06.108 filename=/dev/nvme0n1 00:18:06.108 [job1] 00:18:06.108 filename=/dev/nvme0n2 00:18:06.108 [job2] 00:18:06.108 filename=/dev/nvme0n3 00:18:06.108 [job3] 00:18:06.108 filename=/dev/nvme0n4 00:18:06.108 Could not set queue depth (nvme0n1) 00:18:06.108 Could not set queue depth (nvme0n2) 00:18:06.108 Could not set queue depth (nvme0n3) 00:18:06.108 Could not set queue depth (nvme0n4) 00:18:06.108 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:06.108 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:06.108 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:06.108 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:06.108 fio-3.35 00:18:06.108 Starting 4 threads 00:18:07.493 00:18:07.493 job0: (groupid=0, jobs=1): err= 0: pid=3580957: Wed May 15 19:33:33 2024 00:18:07.493 read: IOPS=471, BW=1886KiB/s (1931kB/s)(1888KiB/1001msec) 00:18:07.493 slat (nsec): min=7837, max=59288, avg=25262.18, stdev=3939.77 00:18:07.493 clat (usec): min=826, max=1434, avg=1225.09, stdev=67.55 00:18:07.493 lat (usec): min=851, max=1459, avg=1250.35, stdev=68.31 00:18:07.493 clat percentiles (usec): 00:18:07.493 | 1.00th=[ 1004], 5.00th=[ 1106], 10.00th=[ 1139], 20.00th=[ 1188], 00:18:07.493 | 30.00th=[ 1205], 40.00th=[ 1221], 50.00th=[ 1237], 60.00th=[ 1254], 00:18:07.493 | 70.00th=[ 1254], 80.00th=[ 1270], 90.00th=[ 1303], 95.00th=[ 1319], 00:18:07.493 | 99.00th=[ 1369], 99.50th=[ 1385], 99.90th=[ 1434], 99.95th=[ 1434], 00:18:07.493 | 
99.99th=[ 1434] 00:18:07.493 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:18:07.493 slat (usec): min=3, max=3521, avg=35.89, stdev=155.25 00:18:07.493 clat (usec): min=415, max=1196, avg=749.41, stdev=108.84 00:18:07.493 lat (usec): min=420, max=4029, avg=785.31, stdev=183.14 00:18:07.493 clat percentiles (usec): 00:18:07.493 | 1.00th=[ 482], 5.00th=[ 545], 10.00th=[ 594], 20.00th=[ 668], 00:18:07.493 | 30.00th=[ 701], 40.00th=[ 725], 50.00th=[ 758], 60.00th=[ 799], 00:18:07.493 | 70.00th=[ 816], 80.00th=[ 848], 90.00th=[ 865], 95.00th=[ 898], 00:18:07.493 | 99.00th=[ 963], 99.50th=[ 988], 99.90th=[ 1205], 99.95th=[ 1205], 00:18:07.493 | 99.99th=[ 1205] 00:18:07.493 bw ( KiB/s): min= 4096, max= 4096, per=50.80%, avg=4096.00, stdev= 0.00, samples=1 00:18:07.493 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:07.493 lat (usec) : 500=0.91%, 750=23.17%, 1000=28.15% 00:18:07.493 lat (msec) : 2=47.76% 00:18:07.493 cpu : usr=1.10%, sys=3.10%, ctx=987, majf=0, minf=1 00:18:07.493 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:07.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.493 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.493 issued rwts: total=472,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:07.493 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:07.493 job1: (groupid=0, jobs=1): err= 0: pid=3580967: Wed May 15 19:33:33 2024 00:18:07.493 read: IOPS=14, BW=59.1KiB/s (60.5kB/s)(60.0KiB/1016msec) 00:18:07.493 slat (nsec): min=25235, max=25755, avg=25443.73, stdev=136.60 00:18:07.493 clat (usec): min=1178, max=43049, avg=39469.83, stdev=10604.76 00:18:07.493 lat (usec): min=1203, max=43075, avg=39495.27, stdev=10604.78 00:18:07.493 clat percentiles (usec): 00:18:07.493 | 1.00th=[ 1172], 5.00th=[ 1172], 10.00th=[41681], 20.00th=[41681], 00:18:07.493 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:18:07.493 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:18:07.493 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:18:07.493 | 99.99th=[43254] 00:18:07.493 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:18:07.493 slat (usec): min=3, max=21941, avg=75.81, stdev=971.45 00:18:07.493 clat (usec): min=386, max=1104, avg=743.58, stdev=112.19 00:18:07.493 lat (usec): min=416, max=22546, avg=819.39, stdev=972.00 00:18:07.493 clat percentiles (usec): 00:18:07.493 | 1.00th=[ 457], 5.00th=[ 537], 10.00th=[ 594], 20.00th=[ 652], 00:18:07.493 | 30.00th=[ 701], 40.00th=[ 725], 50.00th=[ 750], 60.00th=[ 783], 00:18:07.493 | 70.00th=[ 816], 80.00th=[ 840], 90.00th=[ 881], 95.00th=[ 898], 00:18:07.493 | 99.00th=[ 947], 99.50th=[ 1004], 99.90th=[ 1106], 99.95th=[ 1106], 00:18:07.493 | 99.99th=[ 1106] 00:18:07.493 bw ( KiB/s): min= 4096, max= 4096, per=50.80%, avg=4096.00, stdev= 0.00, samples=1 00:18:07.493 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:07.493 lat (usec) : 500=3.04%, 750=44.97%, 1000=48.58% 00:18:07.493 lat (msec) : 2=0.76%, 50=2.66% 00:18:07.493 cpu : usr=0.59%, sys=1.58%, ctx=530, majf=0, minf=1 00:18:07.493 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:07.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.493 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.493 issued rwts: total=15,512,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:18:07.493 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:07.493 job2: (groupid=0, jobs=1): err= 0: pid=3580987: Wed May 15 19:33:33 2024 00:18:07.493 read: IOPS=14, BW=59.6KiB/s (61.0kB/s)(60.0KiB/1007msec) 00:18:07.493 slat (nsec): min=25252, max=25829, avg=25468.13, stdev=193.54 00:18:07.493 clat (usec): min=1291, max=42960, avg=39370.26, stdev=10539.00 00:18:07.493 lat (usec): min=1316, max=42986, avg=39395.72, stdev=10538.95 00:18:07.493 clat percentiles (usec): 00:18:07.493 | 1.00th=[ 1287], 5.00th=[ 1287], 10.00th=[41681], 20.00th=[41681], 00:18:07.493 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:18:07.493 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:18:07.493 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:18:07.493 | 99.99th=[42730] 00:18:07.493 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:18:07.493 slat (usec): min=10, max=2572, avg=38.85, stdev=121.93 00:18:07.493 clat (usec): min=411, max=1008, avg=765.75, stdev=105.02 00:18:07.493 lat (usec): min=439, max=3420, avg=804.60, stdev=167.34 00:18:07.493 clat percentiles (usec): 00:18:07.493 | 1.00th=[ 482], 5.00th=[ 578], 10.00th=[ 627], 20.00th=[ 693], 00:18:07.493 | 30.00th=[ 717], 40.00th=[ 750], 50.00th=[ 775], 60.00th=[ 799], 00:18:07.493 | 70.00th=[ 824], 80.00th=[ 848], 90.00th=[ 889], 95.00th=[ 922], 00:18:07.493 | 99.00th=[ 963], 99.50th=[ 988], 99.90th=[ 1012], 99.95th=[ 1012], 00:18:07.493 | 99.99th=[ 1012] 00:18:07.493 bw ( KiB/s): min= 4096, max= 4096, per=50.80%, avg=4096.00, stdev= 0.00, samples=1 00:18:07.493 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:07.493 lat (usec) : 500=1.90%, 750=37.95%, 1000=57.12% 00:18:07.493 lat (msec) : 2=0.38%, 50=2.66% 00:18:07.493 cpu : usr=0.80%, sys=1.59%, ctx=530, majf=0, minf=1 00:18:07.493 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:07.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.493 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.493 issued rwts: total=15,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:07.493 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:07.493 job3: (groupid=0, jobs=1): err= 0: pid=3580994: Wed May 15 19:33:33 2024 00:18:07.493 read: IOPS=15, BW=63.9KiB/s (65.4kB/s)(64.0KiB/1002msec) 00:18:07.493 slat (nsec): min=25241, max=25971, avg=25454.94, stdev=220.33 00:18:07.493 clat (usec): min=4453, max=43032, avg=39653.70, stdev=9408.05 00:18:07.493 lat (usec): min=4479, max=43058, avg=39679.15, stdev=9408.10 00:18:07.493 clat percentiles (usec): 00:18:07.493 | 1.00th=[ 4424], 5.00th=[ 4424], 10.00th=[41157], 20.00th=[41157], 00:18:07.493 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:18:07.493 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:18:07.493 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:18:07.493 | 99.99th=[43254] 00:18:07.493 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:18:07.493 slat (nsec): min=4554, max=63615, avg=26740.30, stdev=11573.70 00:18:07.493 clat (usec): min=191, max=1025, avg=683.38, stdev=149.65 00:18:07.493 lat (usec): min=205, max=1062, avg=710.12, stdev=155.30 00:18:07.493 clat percentiles (usec): 00:18:07.493 | 1.00th=[ 334], 5.00th=[ 437], 10.00th=[ 486], 20.00th=[ 545], 00:18:07.493 | 30.00th=[ 594], 40.00th=[ 660], 50.00th=[ 701], 
60.00th=[ 725], 00:18:07.493 | 70.00th=[ 766], 80.00th=[ 816], 90.00th=[ 881], 95.00th=[ 906], 00:18:07.493 | 99.00th=[ 979], 99.50th=[ 1012], 99.90th=[ 1029], 99.95th=[ 1029], 00:18:07.493 | 99.99th=[ 1029] 00:18:07.493 bw ( KiB/s): min= 4096, max= 4096, per=50.80%, avg=4096.00, stdev= 0.00, samples=1 00:18:07.493 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:07.493 lat (usec) : 250=0.38%, 500=11.36%, 750=52.27%, 1000=32.20% 00:18:07.493 lat (msec) : 2=0.76%, 10=0.19%, 50=2.84% 00:18:07.493 cpu : usr=1.00%, sys=0.90%, ctx=530, majf=0, minf=1 00:18:07.493 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:07.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.493 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.493 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:07.493 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:07.493 00:18:07.493 Run status group 0 (all jobs): 00:18:07.493 READ: bw=2039KiB/s (2088kB/s), 59.1KiB/s-1886KiB/s (60.5kB/s-1931kB/s), io=2072KiB (2122kB), run=1001-1016msec 00:18:07.493 WRITE: bw=8063KiB/s (8257kB/s), 2016KiB/s-2046KiB/s (2064kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1016msec 00:18:07.493 00:18:07.493 Disk stats (read/write): 00:18:07.493 nvme0n1: ios=390/512, merge=0/0, ticks=508/357, in_queue=865, util=84.07% 00:18:07.493 nvme0n2: ios=63/512, merge=0/0, ticks=562/349, in_queue=911, util=88.76% 00:18:07.493 nvme0n3: ios=67/512, merge=0/0, ticks=547/371, in_queue=918, util=95.03% 00:18:07.493 nvme0n4: ios=74/512, merge=0/0, ticks=1072/325, in_queue=1397, util=96.58% 00:18:07.493 19:33:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:18:07.493 [global] 00:18:07.493 thread=1 00:18:07.493 invalidate=1 00:18:07.493 rw=randwrite 00:18:07.493 time_based=1 00:18:07.493 runtime=1 00:18:07.493 ioengine=libaio 00:18:07.493 direct=1 00:18:07.493 bs=4096 00:18:07.493 iodepth=1 00:18:07.493 norandommap=0 00:18:07.493 numjobs=1 00:18:07.493 00:18:07.493 verify_dump=1 00:18:07.493 verify_backlog=512 00:18:07.493 verify_state_save=0 00:18:07.493 do_verify=1 00:18:07.493 verify=crc32c-intel 00:18:07.493 [job0] 00:18:07.493 filename=/dev/nvme0n1 00:18:07.493 [job1] 00:18:07.493 filename=/dev/nvme0n2 00:18:07.493 [job2] 00:18:07.493 filename=/dev/nvme0n3 00:18:07.493 [job3] 00:18:07.493 filename=/dev/nvme0n4 00:18:07.493 Could not set queue depth (nvme0n1) 00:18:07.493 Could not set queue depth (nvme0n2) 00:18:07.493 Could not set queue depth (nvme0n3) 00:18:07.493 Could not set queue depth (nvme0n4) 00:18:07.754 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:07.754 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:07.754 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:07.754 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:07.754 fio-3.35 00:18:07.754 Starting 4 threads 00:18:09.139 00:18:09.139 job0: (groupid=0, jobs=1): err= 0: pid=3581452: Wed May 15 19:33:35 2024 00:18:09.139 read: IOPS=18, BW=75.0KiB/s (76.8kB/s)(76.0KiB/1013msec) 00:18:09.139 slat (nsec): min=10331, max=25780, avg=24698.21, stdev=3483.12 00:18:09.139 clat (usec): min=40962, 
max=42100, avg=41903.91, stdev=238.19 00:18:09.139 lat (usec): min=40988, max=42125, avg=41928.61, stdev=238.22 00:18:09.139 clat percentiles (usec): 00:18:09.139 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:18:09.139 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:18:09.139 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:09.139 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:09.139 | 99.99th=[42206] 00:18:09.139 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:18:09.139 slat (nsec): min=4443, max=56683, avg=28924.36, stdev=9014.55 00:18:09.139 clat (usec): min=164, max=758, avg=384.31, stdev=123.44 00:18:09.139 lat (usec): min=174, max=762, avg=413.24, stdev=125.64 00:18:09.139 clat percentiles (usec): 00:18:09.139 | 1.00th=[ 169], 5.00th=[ 188], 10.00th=[ 237], 20.00th=[ 289], 00:18:09.139 | 30.00th=[ 306], 40.00th=[ 326], 50.00th=[ 363], 60.00th=[ 408], 00:18:09.139 | 70.00th=[ 441], 80.00th=[ 486], 90.00th=[ 578], 95.00th=[ 611], 00:18:09.139 | 99.00th=[ 676], 99.50th=[ 717], 99.90th=[ 758], 99.95th=[ 758], 00:18:09.139 | 99.99th=[ 758] 00:18:09.139 bw ( KiB/s): min= 4096, max= 4096, per=51.60%, avg=4096.00, stdev= 0.00, samples=1 00:18:09.139 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:09.139 lat (usec) : 250=9.98%, 500=69.49%, 750=16.76%, 1000=0.19% 00:18:09.139 lat (msec) : 50=3.58% 00:18:09.139 cpu : usr=0.89%, sys=1.28%, ctx=535, majf=0, minf=1 00:18:09.139 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:09.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:09.140 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:09.140 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:09.140 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:09.140 job1: (groupid=0, jobs=1): err= 0: pid=3581468: Wed May 15 19:33:35 2024 00:18:09.140 read: IOPS=505, BW=2024KiB/s (2073kB/s)(2028KiB/1002msec) 00:18:09.140 slat (nsec): min=23294, max=53724, avg=24423.02, stdev=2845.95 00:18:09.140 clat (usec): min=916, max=1375, avg=1124.99, stdev=58.69 00:18:09.140 lat (usec): min=940, max=1398, avg=1149.41, stdev=58.62 00:18:09.140 clat percentiles (usec): 00:18:09.140 | 1.00th=[ 988], 5.00th=[ 1020], 10.00th=[ 1057], 20.00th=[ 1074], 00:18:09.140 | 30.00th=[ 1106], 40.00th=[ 1123], 50.00th=[ 1123], 60.00th=[ 1139], 00:18:09.140 | 70.00th=[ 1156], 80.00th=[ 1172], 90.00th=[ 1205], 95.00th=[ 1221], 00:18:09.140 | 99.00th=[ 1254], 99.50th=[ 1287], 99.90th=[ 1369], 99.95th=[ 1369], 00:18:09.140 | 99.99th=[ 1369] 00:18:09.140 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:18:09.140 slat (nsec): min=8848, max=47471, avg=27172.66, stdev=7856.85 00:18:09.140 clat (usec): min=347, max=1080, avg=773.30, stdev=107.32 00:18:09.140 lat (usec): min=358, max=1109, avg=800.48, stdev=110.51 00:18:09.140 clat percentiles (usec): 00:18:09.140 | 1.00th=[ 502], 5.00th=[ 586], 10.00th=[ 627], 20.00th=[ 693], 00:18:09.140 | 30.00th=[ 725], 40.00th=[ 758], 50.00th=[ 775], 60.00th=[ 807], 00:18:09.140 | 70.00th=[ 824], 80.00th=[ 857], 90.00th=[ 906], 95.00th=[ 947], 00:18:09.140 | 99.00th=[ 1020], 99.50th=[ 1057], 99.90th=[ 1074], 99.95th=[ 1074], 00:18:09.140 | 99.99th=[ 1074] 00:18:09.140 bw ( KiB/s): min= 4096, max= 4096, per=51.60%, avg=4096.00, stdev= 0.00, samples=1 00:18:09.140 iops : min= 1024, max= 1024, avg=1024.00, 
stdev= 0.00, samples=1 00:18:09.140 lat (usec) : 500=0.49%, 750=19.04%, 1000=31.50% 00:18:09.140 lat (msec) : 2=48.97% 00:18:09.140 cpu : usr=1.70%, sys=2.60%, ctx=1019, majf=0, minf=1 00:18:09.140 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:09.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:09.140 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:09.140 issued rwts: total=507,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:09.140 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:09.140 job2: (groupid=0, jobs=1): err= 0: pid=3581487: Wed May 15 19:33:35 2024 00:18:09.140 read: IOPS=14, BW=58.1KiB/s (59.5kB/s)(60.0KiB/1032msec) 00:18:09.140 slat (nsec): min=25877, max=27673, avg=26279.87, stdev=434.66 00:18:09.140 clat (usec): min=1207, max=42973, avg=39599.84, stdev=10628.94 00:18:09.140 lat (usec): min=1233, max=42999, avg=39626.12, stdev=10628.93 00:18:09.140 clat percentiles (usec): 00:18:09.140 | 1.00th=[ 1205], 5.00th=[ 1205], 10.00th=[41681], 20.00th=[42206], 00:18:09.140 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:18:09.140 | 70.00th=[42730], 80.00th=[42730], 90.00th=[42730], 95.00th=[42730], 00:18:09.140 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:18:09.140 | 99.99th=[42730] 00:18:09.140 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:18:09.140 slat (nsec): min=9192, max=64629, avg=31742.02, stdev=7424.72 00:18:09.140 clat (usec): min=418, max=1226, avg=814.82, stdev=125.06 00:18:09.140 lat (usec): min=428, max=1273, avg=846.56, stdev=126.79 00:18:09.140 clat percentiles (usec): 00:18:09.140 | 1.00th=[ 523], 5.00th=[ 594], 10.00th=[ 660], 20.00th=[ 717], 00:18:09.140 | 30.00th=[ 750], 40.00th=[ 783], 50.00th=[ 816], 60.00th=[ 848], 00:18:09.140 | 70.00th=[ 873], 80.00th=[ 922], 90.00th=[ 963], 95.00th=[ 1020], 00:18:09.140 | 99.00th=[ 1090], 99.50th=[ 1188], 99.90th=[ 1221], 99.95th=[ 1221], 00:18:09.140 | 99.99th=[ 1221] 00:18:09.140 bw ( KiB/s): min= 4096, max= 4096, per=51.60%, avg=4096.00, stdev= 0.00, samples=1 00:18:09.140 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:09.140 lat (usec) : 500=0.76%, 750=27.32%, 1000=62.81% 00:18:09.140 lat (msec) : 2=6.45%, 50=2.66% 00:18:09.140 cpu : usr=0.87%, sys=2.33%, ctx=527, majf=0, minf=1 00:18:09.140 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:09.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:09.140 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:09.140 issued rwts: total=15,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:09.140 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:09.140 job3: (groupid=0, jobs=1): err= 0: pid=3581495: Wed May 15 19:33:35 2024 00:18:09.140 read: IOPS=420, BW=1681KiB/s (1722kB/s)(1720KiB/1023msec) 00:18:09.140 slat (nsec): min=9390, max=44327, avg=26348.63, stdev=2912.69 00:18:09.140 clat (usec): min=965, max=42347, avg=1471.63, stdev=3411.52 00:18:09.140 lat (usec): min=991, max=42373, avg=1497.98, stdev=3411.52 00:18:09.140 clat percentiles (usec): 00:18:09.140 | 1.00th=[ 996], 5.00th=[ 1057], 10.00th=[ 1106], 20.00th=[ 1139], 00:18:09.140 | 30.00th=[ 1156], 40.00th=[ 1172], 50.00th=[ 1188], 60.00th=[ 1205], 00:18:09.140 | 70.00th=[ 1221], 80.00th=[ 1237], 90.00th=[ 1254], 95.00th=[ 1287], 00:18:09.140 | 99.00th=[ 1352], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 
00:18:09.140 | 99.99th=[42206] 00:18:09.140 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:18:09.140 slat (nsec): min=5041, max=57038, avg=22958.26, stdev=11414.99 00:18:09.140 clat (usec): min=377, max=979, avg=703.36, stdev=94.78 00:18:09.140 lat (usec): min=387, max=1016, avg=726.32, stdev=101.10 00:18:09.140 clat percentiles (usec): 00:18:09.140 | 1.00th=[ 478], 5.00th=[ 553], 10.00th=[ 586], 20.00th=[ 619], 00:18:09.140 | 30.00th=[ 652], 40.00th=[ 676], 50.00th=[ 701], 60.00th=[ 734], 00:18:09.140 | 70.00th=[ 758], 80.00th=[ 791], 90.00th=[ 824], 95.00th=[ 857], 00:18:09.140 | 99.00th=[ 922], 99.50th=[ 930], 99.90th=[ 979], 99.95th=[ 979], 00:18:09.140 | 99.99th=[ 979] 00:18:09.140 bw ( KiB/s): min= 4096, max= 4096, per=51.60%, avg=4096.00, stdev= 0.00, samples=1 00:18:09.140 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:09.140 lat (usec) : 500=0.85%, 750=35.67%, 1000=18.47% 00:18:09.140 lat (msec) : 2=44.69%, 50=0.32% 00:18:09.140 cpu : usr=2.35%, sys=2.45%, ctx=942, majf=0, minf=1 00:18:09.140 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:09.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:09.140 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:09.140 issued rwts: total=430,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:09.140 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:09.140 00:18:09.140 Run status group 0 (all jobs): 00:18:09.140 READ: bw=3764KiB/s (3854kB/s), 58.1KiB/s-2024KiB/s (59.5kB/s-2073kB/s), io=3884KiB (3977kB), run=1002-1032msec 00:18:09.140 WRITE: bw=7938KiB/s (8128kB/s), 1984KiB/s-2044KiB/s (2032kB/s-2093kB/s), io=8192KiB (8389kB), run=1002-1032msec 00:18:09.140 00:18:09.140 Disk stats (read/write): 00:18:09.140 nvme0n1: ios=59/512, merge=0/0, ticks=819/190, in_queue=1009, util=99.60% 00:18:09.140 nvme0n2: ios=405/512, merge=0/0, ticks=537/378, in_queue=915, util=92.56% 00:18:09.140 nvme0n3: ios=27/512, merge=0/0, ticks=856/331, in_queue=1187, util=92.19% 00:18:09.140 nvme0n4: ios=406/512, merge=0/0, ticks=694/328, in_queue=1022, util=90.07% 00:18:09.140 19:33:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:18:09.140 [global] 00:18:09.140 thread=1 00:18:09.140 invalidate=1 00:18:09.140 rw=write 00:18:09.140 time_based=1 00:18:09.140 runtime=1 00:18:09.140 ioengine=libaio 00:18:09.140 direct=1 00:18:09.140 bs=4096 00:18:09.140 iodepth=128 00:18:09.140 norandommap=0 00:18:09.140 numjobs=1 00:18:09.140 00:18:09.140 verify_dump=1 00:18:09.140 verify_backlog=512 00:18:09.140 verify_state_save=0 00:18:09.140 do_verify=1 00:18:09.140 verify=crc32c-intel 00:18:09.140 [job0] 00:18:09.140 filename=/dev/nvme0n1 00:18:09.140 [job1] 00:18:09.140 filename=/dev/nvme0n2 00:18:09.140 [job2] 00:18:09.140 filename=/dev/nvme0n3 00:18:09.140 [job3] 00:18:09.140 filename=/dev/nvme0n4 00:18:09.140 Could not set queue depth (nvme0n1) 00:18:09.140 Could not set queue depth (nvme0n2) 00:18:09.140 Could not set queue depth (nvme0n3) 00:18:09.140 Could not set queue depth (nvme0n4) 00:18:09.400 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:09.400 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:09.400 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:18:09.400 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:09.400 fio-3.35 00:18:09.400 Starting 4 threads 00:18:10.878 00:18:10.878 job0: (groupid=0, jobs=1): err= 0: pid=3581947: Wed May 15 19:33:36 2024 00:18:10.878 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1005msec) 00:18:10.878 slat (nsec): min=1294, max=13821k, avg=177591.39, stdev=1085558.09 00:18:10.878 clat (usec): min=3250, max=57499, avg=23281.80, stdev=12628.11 00:18:10.878 lat (usec): min=5811, max=57508, avg=23459.39, stdev=12699.85 00:18:10.878 clat percentiles (usec): 00:18:10.878 | 1.00th=[ 5932], 5.00th=[ 9896], 10.00th=[11994], 20.00th=[13173], 00:18:10.878 | 30.00th=[15270], 40.00th=[17171], 50.00th=[19006], 60.00th=[20579], 00:18:10.878 | 70.00th=[24249], 80.00th=[34866], 90.00th=[43779], 95.00th=[51119], 00:18:10.878 | 99.00th=[56361], 99.50th=[56361], 99.90th=[57410], 99.95th=[57410], 00:18:10.878 | 99.99th=[57410] 00:18:10.878 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:18:10.878 slat (usec): min=2, max=10828, avg=121.35, stdev=744.23 00:18:10.878 clat (usec): min=1270, max=41211, avg=15554.49, stdev=8296.73 00:18:10.878 lat (usec): min=1282, max=41222, avg=15675.84, stdev=8350.16 00:18:10.878 clat percentiles (usec): 00:18:10.878 | 1.00th=[ 4752], 5.00th=[ 5669], 10.00th=[ 6390], 20.00th=[ 8717], 00:18:10.878 | 30.00th=[10159], 40.00th=[11863], 50.00th=[13173], 60.00th=[15008], 00:18:10.878 | 70.00th=[17695], 80.00th=[23462], 90.00th=[29230], 95.00th=[32113], 00:18:10.878 | 99.00th=[36963], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:10.878 | 99.99th=[41157] 00:18:10.878 bw ( KiB/s): min=10888, max=16808, per=17.88%, avg=13848.00, stdev=4186.07, samples=2 00:18:10.878 iops : min= 2722, max= 4202, avg=3462.00, stdev=1046.52, samples=2 00:18:10.878 lat (msec) : 2=0.21%, 4=0.27%, 10=17.08%, 20=49.22%, 50=30.26% 00:18:10.878 lat (msec) : 100=2.96% 00:18:10.878 cpu : usr=2.59%, sys=3.39%, ctx=276, majf=0, minf=1 00:18:10.879 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:10.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:10.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:10.879 issued rwts: total=3078,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:10.879 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:10.879 job1: (groupid=0, jobs=1): err= 0: pid=3581960: Wed May 15 19:33:36 2024 00:18:10.879 read: IOPS=4795, BW=18.7MiB/s (19.6MB/s)(18.8MiB/1005msec) 00:18:10.879 slat (nsec): min=1284, max=22633k, avg=99781.85, stdev=742491.40 00:18:10.879 clat (usec): min=3146, max=68770, avg=13179.42, stdev=7692.37 00:18:10.879 lat (usec): min=3155, max=68772, avg=13279.21, stdev=7743.61 00:18:10.879 clat percentiles (usec): 00:18:10.879 | 1.00th=[ 5145], 5.00th=[ 6128], 10.00th=[ 7373], 20.00th=[ 8455], 00:18:10.879 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10421], 60.00th=[11207], 00:18:10.879 | 70.00th=[13566], 80.00th=[16057], 90.00th=[21627], 95.00th=[33162], 00:18:10.879 | 99.00th=[41157], 99.50th=[41157], 99.90th=[57934], 99.95th=[57934], 00:18:10.879 | 99.99th=[68682] 00:18:10.879 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:18:10.879 slat (usec): min=2, max=34468, avg=96.33, stdev=711.70 00:18:10.879 clat (usec): min=3764, max=48743, avg=11957.77, stdev=6960.66 00:18:10.879 lat (usec): min=3772, max=48751, avg=12054.10, 
stdev=7017.86 00:18:10.879 clat percentiles (usec): 00:18:10.879 | 1.00th=[ 5080], 5.00th=[ 6259], 10.00th=[ 7439], 20.00th=[ 7963], 00:18:10.879 | 30.00th=[ 8356], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10159], 00:18:10.879 | 70.00th=[10683], 80.00th=[13173], 90.00th=[22676], 95.00th=[27132], 00:18:10.879 | 99.00th=[42730], 99.50th=[45351], 99.90th=[48497], 99.95th=[48497], 00:18:10.879 | 99.99th=[48497] 00:18:10.879 bw ( KiB/s): min=16384, max=24625, per=26.48%, avg=20504.50, stdev=5827.27, samples=2 00:18:10.879 iops : min= 4096, max= 6156, avg=5126.00, stdev=1456.64, samples=2 00:18:10.879 lat (msec) : 4=0.35%, 10=44.35%, 20=44.06%, 50=11.12%, 100=0.12% 00:18:10.879 cpu : usr=3.39%, sys=5.18%, ctx=421, majf=0, minf=1 00:18:10.879 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:18:10.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:10.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:10.879 issued rwts: total=4819,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:10.879 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:10.879 job2: (groupid=0, jobs=1): err= 0: pid=3581978: Wed May 15 19:33:36 2024 00:18:10.879 read: IOPS=6078, BW=23.7MiB/s (24.9MB/s)(23.9MiB/1005msec) 00:18:10.879 slat (nsec): min=1324, max=12632k, avg=82643.61, stdev=594454.40 00:18:10.879 clat (usec): min=1635, max=21418, avg=11467.75, stdev=2943.58 00:18:10.879 lat (usec): min=1675, max=21448, avg=11550.39, stdev=2968.06 00:18:10.879 clat percentiles (usec): 00:18:10.879 | 1.00th=[ 4178], 5.00th=[ 6718], 10.00th=[ 8455], 20.00th=[ 9765], 00:18:10.879 | 30.00th=[10159], 40.00th=[10683], 50.00th=[11076], 60.00th=[11469], 00:18:10.879 | 70.00th=[12256], 80.00th=[13829], 90.00th=[15139], 95.00th=[17171], 00:18:10.879 | 99.00th=[20055], 99.50th=[20579], 99.90th=[20579], 99.95th=[21103], 00:18:10.879 | 99.99th=[21365] 00:18:10.879 write: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec); 0 zone resets 00:18:10.879 slat (usec): min=2, max=8753, avg=66.97, stdev=396.73 00:18:10.879 clat (usec): min=864, max=21233, avg=9270.32, stdev=3004.28 00:18:10.879 lat (usec): min=873, max=21239, avg=9337.28, stdev=3016.87 00:18:10.879 clat percentiles (usec): 00:18:10.879 | 1.00th=[ 1696], 5.00th=[ 4015], 10.00th=[ 5014], 20.00th=[ 6587], 00:18:10.879 | 30.00th=[ 7767], 40.00th=[ 8586], 50.00th=[10159], 60.00th=[10814], 00:18:10.879 | 70.00th=[11207], 80.00th=[11600], 90.00th=[12387], 95.00th=[13698], 00:18:10.879 | 99.00th=[14877], 99.50th=[15139], 99.90th=[16581], 99.95th=[19530], 00:18:10.879 | 99.99th=[21103] 00:18:10.879 bw ( KiB/s): min=24576, max=24576, per=31.74%, avg=24576.00, stdev= 0.00, samples=2 00:18:10.879 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:18:10.879 lat (usec) : 1000=0.02% 00:18:10.879 lat (msec) : 2=0.71%, 4=2.21%, 10=34.90%, 20=61.63%, 50=0.52% 00:18:10.879 cpu : usr=5.58%, sys=6.47%, ctx=590, majf=0, minf=1 00:18:10.879 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:18:10.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:10.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:10.879 issued rwts: total=6109,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:10.879 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:10.879 job3: (groupid=0, jobs=1): err= 0: pid=3581984: Wed May 15 19:33:36 2024 00:18:10.879 read: IOPS=4402, BW=17.2MiB/s (18.0MB/s)(17.3MiB/1005msec) 00:18:10.879 
slat (nsec): min=1326, max=13672k, avg=111458.26, stdev=721094.63 00:18:10.879 clat (usec): min=3380, max=37267, avg=14318.45, stdev=4497.16 00:18:10.879 lat (usec): min=3828, max=37269, avg=14429.91, stdev=4542.79 00:18:10.879 clat percentiles (usec): 00:18:10.879 | 1.00th=[ 5211], 5.00th=[ 8291], 10.00th=[ 9241], 20.00th=[10945], 00:18:10.879 | 30.00th=[11994], 40.00th=[12780], 50.00th=[13960], 60.00th=[14877], 00:18:10.879 | 70.00th=[16057], 80.00th=[17433], 90.00th=[19530], 95.00th=[21365], 00:18:10.879 | 99.00th=[31589], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341], 00:18:10.879 | 99.99th=[37487] 00:18:10.879 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:18:10.879 slat (usec): min=2, max=8795, avg=104.64, stdev=648.97 00:18:10.879 clat (usec): min=3019, max=32798, avg=13746.35, stdev=4907.38 00:18:10.879 lat (usec): min=3198, max=32809, avg=13850.99, stdev=4956.57 00:18:10.879 clat percentiles (usec): 00:18:10.879 | 1.00th=[ 3884], 5.00th=[ 6456], 10.00th=[ 8848], 20.00th=[ 9634], 00:18:10.879 | 30.00th=[11469], 40.00th=[12256], 50.00th=[13435], 60.00th=[13960], 00:18:10.879 | 70.00th=[14877], 80.00th=[16909], 90.00th=[20055], 95.00th=[22414], 00:18:10.879 | 99.00th=[30278], 99.50th=[32637], 99.90th=[32900], 99.95th=[32900], 00:18:10.879 | 99.99th=[32900] 00:18:10.879 bw ( KiB/s): min=17114, max=19784, per=23.82%, avg=18449.00, stdev=1887.98, samples=2 00:18:10.879 iops : min= 4278, max= 4946, avg=4612.00, stdev=472.35, samples=2 00:18:10.879 lat (msec) : 4=1.05%, 10=17.76%, 20=72.18%, 50=9.01% 00:18:10.879 cpu : usr=3.29%, sys=4.88%, ctx=343, majf=0, minf=1 00:18:10.879 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:18:10.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:10.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:10.879 issued rwts: total=4425,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:10.879 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:10.879 00:18:10.879 Run status group 0 (all jobs): 00:18:10.879 READ: bw=71.6MiB/s (75.1MB/s), 12.0MiB/s-23.7MiB/s (12.5MB/s-24.9MB/s), io=72.0MiB (75.5MB), run=1005-1005msec 00:18:10.879 WRITE: bw=75.6MiB/s (79.3MB/s), 13.9MiB/s-23.9MiB/s (14.6MB/s-25.0MB/s), io=76.0MiB (79.7MB), run=1005-1005msec 00:18:10.879 00:18:10.879 Disk stats (read/write): 00:18:10.879 nvme0n1: ios=2394/2560, merge=0/0, ticks=21130/17930, in_queue=39060, util=84.67% 00:18:10.879 nvme0n2: ios=4297/4608, merge=0/0, ticks=24847/20516, in_queue=45363, util=90.33% 00:18:10.879 nvme0n3: ios=4891/5120, merge=0/0, ticks=48592/42327, in_queue=90919, util=95.16% 00:18:10.879 nvme0n4: ios=3704/4096, merge=0/0, ticks=25893/24646, in_queue=50539, util=94.04% 00:18:10.879 19:33:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:18:10.879 [global] 00:18:10.879 thread=1 00:18:10.879 invalidate=1 00:18:10.879 rw=randwrite 00:18:10.879 time_based=1 00:18:10.879 runtime=1 00:18:10.879 ioengine=libaio 00:18:10.879 direct=1 00:18:10.879 bs=4096 00:18:10.879 iodepth=128 00:18:10.879 norandommap=0 00:18:10.879 numjobs=1 00:18:10.879 00:18:10.879 verify_dump=1 00:18:10.879 verify_backlog=512 00:18:10.879 verify_state_save=0 00:18:10.879 do_verify=1 00:18:10.879 verify=crc32c-intel 00:18:10.879 [job0] 00:18:10.879 filename=/dev/nvme0n1 00:18:10.879 [job1] 00:18:10.879 filename=/dev/nvme0n2 00:18:10.879 [job2] 00:18:10.879 
filename=/dev/nvme0n3 00:18:10.879 [job3] 00:18:10.879 filename=/dev/nvme0n4 00:18:10.879 Could not set queue depth (nvme0n1) 00:18:10.879 Could not set queue depth (nvme0n2) 00:18:10.879 Could not set queue depth (nvme0n3) 00:18:10.879 Could not set queue depth (nvme0n4) 00:18:11.142 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:11.142 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:11.143 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:11.143 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:11.143 fio-3.35 00:18:11.143 Starting 4 threads 00:18:12.554 00:18:12.554 job0: (groupid=0, jobs=1): err= 0: pid=3582418: Wed May 15 19:33:38 2024 00:18:12.554 read: IOPS=2744, BW=10.7MiB/s (11.2MB/s)(10.8MiB/1003msec) 00:18:12.554 slat (nsec): min=1220, max=45026k, avg=202699.22, stdev=1358200.08 00:18:12.554 clat (usec): min=1359, max=70015, avg=24550.76, stdev=12146.56 00:18:12.554 lat (usec): min=2900, max=70021, avg=24753.46, stdev=12245.26 00:18:12.554 clat percentiles (usec): 00:18:12.554 | 1.00th=[ 5669], 5.00th=[ 8717], 10.00th=[ 9503], 20.00th=[15401], 00:18:12.554 | 30.00th=[17433], 40.00th=[21103], 50.00th=[22938], 60.00th=[25297], 00:18:12.554 | 70.00th=[28181], 80.00th=[31065], 90.00th=[45876], 95.00th=[47973], 00:18:12.554 | 99.00th=[61604], 99.50th=[69731], 99.90th=[69731], 99.95th=[69731], 00:18:12.554 | 99.99th=[69731] 00:18:12.554 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:18:12.554 slat (usec): min=2, max=13231, avg=138.63, stdev=855.56 00:18:12.554 clat (usec): min=1160, max=56890, avg=19430.58, stdev=11583.31 00:18:12.554 lat (usec): min=1172, max=56902, avg=19569.22, stdev=11660.89 00:18:12.554 clat percentiles (usec): 00:18:12.554 | 1.00th=[ 6325], 5.00th=[ 7898], 10.00th=[ 8586], 20.00th=[11600], 00:18:12.554 | 30.00th=[12518], 40.00th=[13042], 50.00th=[14484], 60.00th=[16712], 00:18:12.554 | 70.00th=[22676], 80.00th=[27395], 90.00th=[35390], 95.00th=[46400], 00:18:12.554 | 99.00th=[54264], 99.50th=[56361], 99.90th=[56886], 99.95th=[56886], 00:18:12.554 | 99.99th=[56886] 00:18:12.554 bw ( KiB/s): min=12288, max=12288, per=17.33%, avg=12288.00, stdev= 0.00, samples=2 00:18:12.554 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:18:12.554 lat (msec) : 2=0.05%, 4=0.10%, 10=14.13%, 20=35.97%, 50=46.25% 00:18:12.554 lat (msec) : 100=3.50% 00:18:12.554 cpu : usr=2.30%, sys=2.99%, ctx=246, majf=0, minf=1 00:18:12.554 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:18:12.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:12.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:12.554 issued rwts: total=2753,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:12.554 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:12.554 job1: (groupid=0, jobs=1): err= 0: pid=3582431: Wed May 15 19:33:38 2024 00:18:12.554 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:18:12.554 slat (nsec): min=1271, max=12902k, avg=159078.53, stdev=922357.64 00:18:12.554 clat (usec): min=6266, max=60200, avg=19523.42, stdev=10398.69 00:18:12.554 lat (usec): min=6274, max=60227, avg=19682.50, stdev=10503.22 00:18:12.554 clat percentiles (usec): 00:18:12.554 | 1.00th=[ 7504], 5.00th=[ 
8586], 10.00th=[ 9765], 20.00th=[12518], 00:18:12.554 | 30.00th=[13566], 40.00th=[15401], 50.00th=[16450], 60.00th=[17957], 00:18:12.554 | 70.00th=[19268], 80.00th=[22152], 90.00th=[39060], 95.00th=[43254], 00:18:12.554 | 99.00th=[47449], 99.50th=[53216], 99.90th=[54264], 99.95th=[55313], 00:18:12.554 | 99.99th=[60031] 00:18:12.554 write: IOPS=4002, BW=15.6MiB/s (16.4MB/s)(15.7MiB/1006msec); 0 zone resets 00:18:12.554 slat (usec): min=2, max=11590, avg=98.36, stdev=587.25 00:18:12.554 clat (usec): min=841, max=59164, avg=14213.22, stdev=7182.20 00:18:12.554 lat (usec): min=3006, max=59204, avg=14311.58, stdev=7227.52 00:18:12.554 clat percentiles (usec): 00:18:12.554 | 1.00th=[ 4817], 5.00th=[ 6652], 10.00th=[ 8225], 20.00th=[10028], 00:18:12.554 | 30.00th=[11076], 40.00th=[11600], 50.00th=[11994], 60.00th=[12780], 00:18:12.554 | 70.00th=[14877], 80.00th=[18744], 90.00th=[21890], 95.00th=[25297], 00:18:12.554 | 99.00th=[47449], 99.50th=[47449], 99.90th=[52167], 99.95th=[53216], 00:18:12.554 | 99.99th=[58983] 00:18:12.554 bw ( KiB/s): min=12288, max=18904, per=21.99%, avg=15596.00, stdev=4678.22, samples=2 00:18:12.554 iops : min= 3072, max= 4726, avg=3899.00, stdev=1169.55, samples=2 00:18:12.554 lat (usec) : 1000=0.01% 00:18:12.554 lat (msec) : 4=0.49%, 10=15.37%, 20=64.10%, 50=19.52%, 100=0.50% 00:18:12.554 cpu : usr=3.48%, sys=3.88%, ctx=337, majf=0, minf=2 00:18:12.554 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:12.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:12.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:12.555 issued rwts: total=3584,4027,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:12.555 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:12.555 job2: (groupid=0, jobs=1): err= 0: pid=3582450: Wed May 15 19:33:38 2024 00:18:12.555 read: IOPS=5441, BW=21.3MiB/s (22.3MB/s)(21.4MiB/1007msec) 00:18:12.555 slat (nsec): min=1288, max=13503k, avg=95461.75, stdev=685953.69 00:18:12.555 clat (usec): min=1765, max=48241, avg=12532.72, stdev=6492.48 00:18:12.555 lat (usec): min=1775, max=48268, avg=12628.18, stdev=6543.55 00:18:12.555 clat percentiles (usec): 00:18:12.555 | 1.00th=[ 4948], 5.00th=[ 6456], 10.00th=[ 7373], 20.00th=[ 8225], 00:18:12.555 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[10159], 60.00th=[11207], 00:18:12.555 | 70.00th=[12911], 80.00th=[16188], 90.00th=[21365], 95.00th=[26084], 00:18:12.555 | 99.00th=[36439], 99.50th=[43779], 99.90th=[44303], 99.95th=[45876], 00:18:12.555 | 99.99th=[48497] 00:18:12.555 write: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec); 0 zone resets 00:18:12.555 slat (usec): min=2, max=11492, avg=74.88, stdev=451.62 00:18:12.555 clat (usec): min=393, max=44304, avg=10484.02, stdev=4335.51 00:18:12.555 lat (usec): min=517, max=44308, avg=10558.90, stdev=4354.45 00:18:12.555 clat percentiles (usec): 00:18:12.555 | 1.00th=[ 3032], 5.00th=[ 5342], 10.00th=[ 6194], 20.00th=[ 7242], 00:18:12.555 | 30.00th=[ 8291], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[10028], 00:18:12.555 | 70.00th=[12125], 80.00th=[13829], 90.00th=[15270], 95.00th=[17433], 00:18:12.555 | 99.00th=[30540], 99.50th=[31589], 99.90th=[33424], 99.95th=[36963], 00:18:12.555 | 99.99th=[44303] 00:18:12.555 bw ( KiB/s): min=20480, max=24576, per=31.77%, avg=22528.00, stdev=2896.31, samples=2 00:18:12.555 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:18:12.555 lat (usec) : 500=0.01% 00:18:12.555 lat (msec) : 2=0.16%, 4=1.18%, 10=52.09%, 
20=39.87%, 50=6.70% 00:18:12.555 cpu : usr=4.17%, sys=6.16%, ctx=478, majf=0, minf=1 00:18:12.555 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:18:12.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:12.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:12.555 issued rwts: total=5480,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:12.555 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:12.555 job3: (groupid=0, jobs=1): err= 0: pid=3582458: Wed May 15 19:33:38 2024 00:18:12.555 read: IOPS=5017, BW=19.6MiB/s (20.6MB/s)(19.7MiB/1003msec) 00:18:12.555 slat (nsec): min=1310, max=8014.8k, avg=91997.90, stdev=565373.12 00:18:12.555 clat (usec): min=1686, max=26552, avg=12014.65, stdev=3860.27 00:18:12.555 lat (usec): min=3444, max=26555, avg=12106.64, stdev=3896.29 00:18:12.555 clat percentiles (usec): 00:18:12.555 | 1.00th=[ 5800], 5.00th=[ 7046], 10.00th=[ 7177], 20.00th=[ 8094], 00:18:12.555 | 30.00th=[ 9241], 40.00th=[10552], 50.00th=[11731], 60.00th=[13042], 00:18:12.555 | 70.00th=[14091], 80.00th=[15664], 90.00th=[16712], 95.00th=[18744], 00:18:12.555 | 99.00th=[21890], 99.50th=[22414], 99.90th=[25297], 99.95th=[26608], 00:18:12.555 | 99.99th=[26608] 00:18:12.555 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:18:12.555 slat (usec): min=2, max=12163, avg=99.43, stdev=512.55 00:18:12.555 clat (usec): min=2558, max=29182, avg=13043.13, stdev=6848.72 00:18:12.555 lat (usec): min=2568, max=29192, avg=13142.56, stdev=6892.27 00:18:12.555 clat percentiles (usec): 00:18:12.555 | 1.00th=[ 3621], 5.00th=[ 5407], 10.00th=[ 5866], 20.00th=[ 6980], 00:18:12.555 | 30.00th=[ 7308], 40.00th=[ 9110], 50.00th=[11076], 60.00th=[13042], 00:18:12.555 | 70.00th=[17171], 80.00th=[20055], 90.00th=[24511], 95.00th=[26346], 00:18:12.555 | 99.00th=[28181], 99.50th=[28967], 99.90th=[29230], 99.95th=[29230], 00:18:12.555 | 99.99th=[29230] 00:18:12.555 bw ( KiB/s): min=16384, max=24576, per=28.88%, avg=20480.00, stdev=5792.62, samples=2 00:18:12.555 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 00:18:12.555 lat (msec) : 2=0.01%, 4=0.93%, 10=40.21%, 20=47.56%, 50=11.29% 00:18:12.555 cpu : usr=3.09%, sys=6.19%, ctx=449, majf=0, minf=1 00:18:12.555 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:18:12.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:12.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:12.555 issued rwts: total=5033,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:12.555 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:12.555 00:18:12.555 Run status group 0 (all jobs): 00:18:12.555 READ: bw=65.4MiB/s (68.5MB/s), 10.7MiB/s-21.3MiB/s (11.2MB/s-22.3MB/s), io=65.8MiB (69.0MB), run=1003-1007msec 00:18:12.555 WRITE: bw=69.2MiB/s (72.6MB/s), 12.0MiB/s-21.8MiB/s (12.5MB/s-22.9MB/s), io=69.7MiB (73.1MB), run=1003-1007msec 00:18:12.555 00:18:12.555 Disk stats (read/write): 00:18:12.555 nvme0n1: ios=2220/2560, merge=0/0, ticks=26359/22179, in_queue=48538, util=87.88% 00:18:12.555 nvme0n2: ios=3110/3239, merge=0/0, ticks=26374/19107, in_queue=45481, util=87.97% 00:18:12.555 nvme0n3: ios=4464/4608, merge=0/0, ticks=39828/33843, in_queue=73671, util=95.25% 00:18:12.555 nvme0n4: ios=4268/4608, merge=0/0, ticks=37061/44278, in_queue=81339, util=99.36% 00:18:12.555 19:33:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:18:12.555 19:33:38 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3582696 00:18:12.555 19:33:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:18:12.555 19:33:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:18:12.555 [global] 00:18:12.555 thread=1 00:18:12.555 invalidate=1 00:18:12.555 rw=read 00:18:12.555 time_based=1 00:18:12.555 runtime=10 00:18:12.555 ioengine=libaio 00:18:12.555 direct=1 00:18:12.555 bs=4096 00:18:12.555 iodepth=1 00:18:12.555 norandommap=1 00:18:12.555 numjobs=1 00:18:12.555 00:18:12.555 [job0] 00:18:12.555 filename=/dev/nvme0n1 00:18:12.555 [job1] 00:18:12.555 filename=/dev/nvme0n2 00:18:12.555 [job2] 00:18:12.555 filename=/dev/nvme0n3 00:18:12.555 [job3] 00:18:12.555 filename=/dev/nvme0n4 00:18:12.555 Could not set queue depth (nvme0n1) 00:18:12.555 Could not set queue depth (nvme0n2) 00:18:12.555 Could not set queue depth (nvme0n3) 00:18:12.555 Could not set queue depth (nvme0n4) 00:18:12.818 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:12.819 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:12.819 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:12.819 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:12.819 fio-3.35 00:18:12.819 Starting 4 threads 00:18:15.356 19:33:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:18:15.616 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=8818688, buflen=4096 00:18:15.616 fio: pid=3582927, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:15.616 19:33:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:18:15.877 19:33:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:15.877 19:33:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:18:15.877 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=290816, buflen=4096 00:18:15.877 fio: pid=3582920, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:15.877 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=9383936, buflen=4096 00:18:15.877 fio: pid=3582902, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:15.877 19:33:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:15.877 19:33:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:18:16.138 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=11669504, buflen=4096 00:18:16.138 fio: pid=3582909, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:16.138 19:33:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:16.138 19:33:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:18:16.138 00:18:16.138 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3582902: Wed May 15 19:33:42 2024 00:18:16.138 read: IOPS=745, BW=2981KiB/s (3053kB/s)(9164KiB/3074msec) 00:18:16.138 slat (usec): min=7, max=24207, avg=49.63, stdev=654.41 00:18:16.138 clat (usec): min=755, max=42954, avg=1274.16, stdev=2022.03 00:18:16.138 lat (usec): min=781, max=42979, avg=1323.80, stdev=2125.26 00:18:16.138 clat percentiles (usec): 00:18:16.138 | 1.00th=[ 971], 5.00th=[ 1045], 10.00th=[ 1074], 20.00th=[ 1106], 00:18:16.138 | 30.00th=[ 1123], 40.00th=[ 1156], 50.00th=[ 1172], 60.00th=[ 1188], 00:18:16.138 | 70.00th=[ 1221], 80.00th=[ 1237], 90.00th=[ 1270], 95.00th=[ 1303], 00:18:16.138 | 99.00th=[ 1369], 99.50th=[ 1434], 99.90th=[41681], 99.95th=[42206], 00:18:16.138 | 99.99th=[42730] 00:18:16.138 bw ( KiB/s): min= 1920, max= 3352, per=33.42%, avg=3009.60, stdev=616.35, samples=5 00:18:16.138 iops : min= 480, max= 838, avg=752.40, stdev=154.09, samples=5 00:18:16.138 lat (usec) : 1000=2.05% 00:18:16.138 lat (msec) : 2=97.60%, 10=0.04%, 50=0.26% 00:18:16.138 cpu : usr=0.75%, sys=2.25%, ctx=2297, majf=0, minf=1 00:18:16.138 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:16.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.138 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.138 issued rwts: total=2292,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:16.138 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:16.138 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3582909: Wed May 15 19:33:42 2024 00:18:16.138 read: IOPS=871, BW=3483KiB/s (3566kB/s)(11.1MiB/3272msec) 00:18:16.138 slat (usec): min=6, max=29843, avg=55.12, stdev=758.21 00:18:16.138 clat (usec): min=366, max=4002, avg=1083.73, stdev=259.15 00:18:16.138 lat (usec): min=385, max=30993, avg=1138.85, stdev=805.09 00:18:16.138 clat percentiles (usec): 00:18:16.138 | 1.00th=[ 469], 5.00th=[ 603], 10.00th=[ 701], 20.00th=[ 758], 00:18:16.138 | 30.00th=[ 1029], 40.00th=[ 1172], 50.00th=[ 1205], 60.00th=[ 1237], 00:18:16.138 | 70.00th=[ 1254], 80.00th=[ 1270], 90.00th=[ 1303], 95.00th=[ 1336], 00:18:16.138 | 99.00th=[ 1401], 99.50th=[ 1418], 99.90th=[ 1516], 99.95th=[ 1729], 00:18:16.138 | 99.99th=[ 4015] 00:18:16.138 bw ( KiB/s): min= 2938, max= 4792, per=39.09%, avg=3519.00, stdev=711.92, samples=6 00:18:16.138 iops : min= 734, max= 1198, avg=879.67, stdev=178.06, samples=6 00:18:16.138 lat (usec) : 500=1.16%, 750=17.68%, 1000=10.39% 00:18:16.138 lat (msec) : 2=70.70%, 10=0.04% 00:18:16.138 cpu : usr=0.67%, sys=2.69%, ctx=2857, majf=0, minf=1 00:18:16.138 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:16.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.138 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.138 issued rwts: total=2850,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:16.138 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:16.138 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3582920: Wed May 15 19:33:42 2024 00:18:16.138 read: IOPS=25, BW=99.1KiB/s (101kB/s)(284KiB/2866msec) 00:18:16.138 slat (usec): min=8, max=238, avg=28.29, stdev=25.37 00:18:16.138 clat (usec): min=872, 
max=42941, avg=40038.16, stdev=8328.74 00:18:16.138 lat (usec): min=881, max=42966, avg=40066.49, stdev=8329.38 00:18:16.138 clat percentiles (usec): 00:18:16.138 | 1.00th=[ 873], 5.00th=[32637], 10.00th=[41157], 20.00th=[41681], 00:18:16.138 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:18:16.138 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:16.138 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:18:16.138 | 99.99th=[42730] 00:18:16.138 bw ( KiB/s): min= 96, max= 112, per=1.10%, avg=99.20, stdev= 7.16, samples=5 00:18:16.138 iops : min= 24, max= 28, avg=24.80, stdev= 1.79, samples=5 00:18:16.138 lat (usec) : 1000=1.39% 00:18:16.138 lat (msec) : 2=2.78%, 50=94.44% 00:18:16.138 cpu : usr=0.10%, sys=0.00%, ctx=73, majf=0, minf=1 00:18:16.138 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:16.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.138 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.138 issued rwts: total=72,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:16.138 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:16.138 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3582927: Wed May 15 19:33:42 2024 00:18:16.138 read: IOPS=813, BW=3252KiB/s (3330kB/s)(8612KiB/2648msec) 00:18:16.138 slat (nsec): min=6437, max=62276, avg=24193.54, stdev=5075.44 00:18:16.138 clat (usec): min=370, max=42007, avg=1184.73, stdev=3186.52 00:18:16.138 lat (usec): min=377, max=42032, avg=1208.92, stdev=3186.54 00:18:16.138 clat percentiles (usec): 00:18:16.138 | 1.00th=[ 545], 5.00th=[ 676], 10.00th=[ 709], 20.00th=[ 742], 00:18:16.138 | 30.00th=[ 799], 40.00th=[ 873], 50.00th=[ 938], 60.00th=[ 1012], 00:18:16.138 | 70.00th=[ 1074], 80.00th=[ 1123], 90.00th=[ 1156], 95.00th=[ 1188], 00:18:16.138 | 99.00th=[ 1254], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:18:16.138 | 99.99th=[42206] 00:18:16.138 bw ( KiB/s): min= 176, max= 4408, per=37.30%, avg=3358.40, stdev=1785.42, samples=5 00:18:16.138 iops : min= 44, max= 1102, avg=839.60, stdev=446.36, samples=5 00:18:16.138 lat (usec) : 500=0.60%, 750=20.80%, 1000=36.12% 00:18:16.138 lat (msec) : 2=41.74%, 4=0.05%, 20=0.05%, 50=0.60% 00:18:16.138 cpu : usr=0.91%, sys=2.23%, ctx=2154, majf=0, minf=2 00:18:16.138 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:16.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.138 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.138 issued rwts: total=2154,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:16.138 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:16.138 00:18:16.138 Run status group 0 (all jobs): 00:18:16.138 READ: bw=9002KiB/s (9219kB/s), 99.1KiB/s-3483KiB/s (101kB/s-3566kB/s), io=28.8MiB (30.2MB), run=2648-3272msec 00:18:16.138 00:18:16.138 Disk stats (read/write): 00:18:16.138 nvme0n1: ios=2135/0, merge=0/0, ticks=2665/0, in_queue=2665, util=93.76% 00:18:16.138 nvme0n2: ios=2701/0, merge=0/0, ticks=2851/0, in_queue=2851, util=93.22% 00:18:16.138 nvme0n3: ios=70/0, merge=0/0, ticks=2802/0, in_queue=2802, util=96.37% 00:18:16.138 nvme0n4: ios=2123/0, merge=0/0, ticks=2456/0, in_queue=2456, util=96.42% 00:18:16.399 19:33:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:16.399 19:33:42 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:18:16.659 19:33:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:16.659 19:33:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:18:16.917 19:33:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:16.917 19:33:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:18:17.177 19:33:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:17.177 19:33:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:18:17.437 19:33:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:18:17.437 19:33:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 3582696 00:18:17.437 19:33:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:18:17.437 19:33:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:17.437 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:17.437 19:33:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:17.437 19:33:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:18:17.437 19:33:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:18:17.437 19:33:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:17.437 19:33:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:18:17.437 19:33:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:17.437 19:33:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:18:17.437 19:33:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:18:17.437 19:33:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:18:17.437 nvmf hotplug test: fio failed as expected 00:18:17.437 19:33:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:17.698 19:33:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:18:17.698 19:33:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:18:17.698 19:33:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:18:17.698 19:33:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:18:17.698 19:33:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:18:17.698 19:33:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:17.698 19:33:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:18:17.698 19:33:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:17.698 19:33:43 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@120 -- # set +e 00:18:17.698 19:33:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:17.698 19:33:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:17.698 rmmod nvme_tcp 00:18:17.698 rmmod nvme_fabrics 00:18:17.698 rmmod nvme_keyring 00:18:17.698 19:33:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:17.698 19:33:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:18:17.699 19:33:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:18:17.699 19:33:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 3579155 ']' 00:18:17.699 19:33:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3579155 00:18:17.699 19:33:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 3579155 ']' 00:18:17.699 19:33:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 3579155 00:18:17.699 19:33:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:18:17.699 19:33:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:17.699 19:33:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3579155 00:18:17.699 19:33:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:17.699 19:33:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:17.699 19:33:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3579155' 00:18:17.699 killing process with pid 3579155 00:18:17.699 19:33:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 3579155 00:18:17.699 [2024-05-15 19:33:43.815488] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:17.699 19:33:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 3579155 00:18:17.960 19:33:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:17.960 19:33:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:17.960 19:33:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:17.960 19:33:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:17.960 19:33:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:17.960 19:33:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.960 19:33:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:17.960 19:33:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.873 19:33:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:19.873 00:18:19.873 real 0m30.366s 00:18:19.873 user 2m38.037s 00:18:19.873 sys 0m9.827s 00:18:19.873 19:33:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:19.873 19:33:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.873 ************************************ 00:18:19.873 END TEST nvmf_fio_target 00:18:19.873 ************************************ 00:18:20.135 19:33:46 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:20.135 19:33:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:20.135 19:33:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:20.135 19:33:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:20.135 ************************************ 00:18:20.135 START TEST nvmf_bdevio 00:18:20.135 ************************************ 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:20.135 * Looking for test storage... 00:18:20.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 
00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:18:20.135 19:33:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:28.280 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:28.280 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:28.280 Found net devices under 0000:31:00.0: cvl_0_0 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:28.280 Found net devices under 0000:31:00.1: cvl_0_1 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:28.280 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:28.280 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.539 ms 00:18:28.280 00:18:28.280 --- 10.0.0.2 ping statistics --- 00:18:28.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.280 rtt min/avg/max/mdev = 0.539/0.539/0.539/0.000 ms 00:18:28.280 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:28.280 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:28.280 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.447 ms 00:18:28.280 00:18:28.280 --- 10.0.0.1 ping statistics --- 00:18:28.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.281 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:18:28.281 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:28.281 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:18:28.281 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:28.281 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:28.281 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:28.281 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:28.281 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:28.281 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:28.281 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:28.281 19:33:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:28.281 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:28.281 19:33:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:28.281 19:33:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:28.281 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3588591 00:18:28.281 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3588591 00:18:28.281 19:33:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 3588591 ']' 00:18:28.281 19:33:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.281 19:33:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:18:28.281 19:33:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:28.281 19:33:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.281 19:33:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:28.281 19:33:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:28.542 [2024-05-15 19:33:54.491797] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
00:18:28.542 [2024-05-15 19:33:54.491864] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:28.542 EAL: No free 2048 kB hugepages reported on node 1 00:18:28.542 [2024-05-15 19:33:54.593063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:28.542 [2024-05-15 19:33:54.685228] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:28.542 [2024-05-15 19:33:54.685287] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:28.542 [2024-05-15 19:33:54.685295] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:28.542 [2024-05-15 19:33:54.685301] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:28.542 [2024-05-15 19:33:54.685321] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:28.542 [2024-05-15 19:33:54.685522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:28.542 [2024-05-15 19:33:54.685750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:28.542 [2024-05-15 19:33:54.685922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:28.542 [2024-05-15 19:33:54.685925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:29.484 19:33:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:29.484 19:33:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:18:29.484 19:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:29.484 19:33:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:29.484 19:33:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:29.484 19:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:29.484 19:33:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:29.484 19:33:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.484 19:33:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:29.484 [2024-05-15 19:33:55.420809] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:29.484 19:33:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.484 19:33:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:29.484 19:33:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.484 19:33:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:29.484 Malloc0 00:18:29.484 19:33:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.484 19:33:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:29.484 19:33:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.484 19:33:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:29.484 19:33:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.484 19:33:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:29.484 19:33:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.484 19:33:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:29.484 19:33:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.484 19:33:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:29.484 19:33:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.484 19:33:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:29.484 [2024-05-15 19:33:55.473429] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:29.484 [2024-05-15 19:33:55.473737] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:29.484 19:33:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.484 19:33:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:18:29.484 19:33:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:29.484 19:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:18:29.484 19:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:18:29.484 19:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:29.484 19:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:29.484 { 00:18:29.484 "params": { 00:18:29.484 "name": "Nvme$subsystem", 00:18:29.484 "trtype": "$TEST_TRANSPORT", 00:18:29.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:29.484 "adrfam": "ipv4", 00:18:29.484 "trsvcid": "$NVMF_PORT", 00:18:29.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:29.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:29.484 "hdgst": ${hdgst:-false}, 00:18:29.484 "ddgst": ${ddgst:-false} 00:18:29.484 }, 00:18:29.484 "method": "bdev_nvme_attach_controller" 00:18:29.484 } 00:18:29.484 EOF 00:18:29.484 )") 00:18:29.484 19:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:18:29.484 19:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:18:29.484 19:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:18:29.484 19:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:29.484 "params": { 00:18:29.484 "name": "Nvme1", 00:18:29.484 "trtype": "tcp", 00:18:29.484 "traddr": "10.0.0.2", 00:18:29.484 "adrfam": "ipv4", 00:18:29.484 "trsvcid": "4420", 00:18:29.484 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.484 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:29.484 "hdgst": false, 00:18:29.484 "ddgst": false 00:18:29.484 }, 00:18:29.484 "method": "bdev_nvme_attach_controller" 00:18:29.484 }' 00:18:29.484 [2024-05-15 19:33:55.528583] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
00:18:29.485 [2024-05-15 19:33:55.528648] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3588939 ] 00:18:29.485 EAL: No free 2048 kB hugepages reported on node 1 00:18:29.485 [2024-05-15 19:33:55.618118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:29.744 [2024-05-15 19:33:55.716273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:29.744 [2024-05-15 19:33:55.716423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:29.744 [2024-05-15 19:33:55.716593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.744 I/O targets: 00:18:29.744 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:29.744 00:18:29.744 00:18:29.744 CUnit - A unit testing framework for C - Version 2.1-3 00:18:29.744 http://cunit.sourceforge.net/ 00:18:29.744 00:18:29.744 00:18:29.744 Suite: bdevio tests on: Nvme1n1 00:18:29.744 Test: blockdev write read block ...passed 00:18:30.003 Test: blockdev write zeroes read block ...passed 00:18:30.003 Test: blockdev write zeroes read no split ...passed 00:18:30.003 Test: blockdev write zeroes read split ...passed 00:18:30.003 Test: blockdev write zeroes read split partial ...passed 00:18:30.003 Test: blockdev reset ...[2024-05-15 19:33:56.086229] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:30.003 [2024-05-15 19:33:56.086297] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d1930 (9): Bad file descriptor 00:18:30.003 [2024-05-15 19:33:56.146067] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:30.003 passed 00:18:30.263 Test: blockdev write read 8 blocks ...passed 00:18:30.263 Test: blockdev write read size > 128k ...passed 00:18:30.263 Test: blockdev write read invalid size ...passed 00:18:30.263 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:30.263 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:30.263 Test: blockdev write read max offset ...passed 00:18:30.263 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:30.263 Test: blockdev writev readv 8 blocks ...passed 00:18:30.263 Test: blockdev writev readv 30 x 1block ...passed 00:18:30.263 Test: blockdev writev readv block ...passed 00:18:30.263 Test: blockdev writev readv size > 128k ...passed 00:18:30.263 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:30.263 Test: blockdev comparev and writev ...[2024-05-15 19:33:56.413491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:30.263 [2024-05-15 19:33:56.413518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:30.263 [2024-05-15 19:33:56.413529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:30.263 [2024-05-15 19:33:56.413535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:30.263 [2024-05-15 19:33:56.413957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:30.263 [2024-05-15 19:33:56.413966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:30.263 [2024-05-15 19:33:56.413980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:30.263 [2024-05-15 19:33:56.413986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:30.263 [2024-05-15 19:33:56.414414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:30.263 [2024-05-15 19:33:56.414423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:30.263 [2024-05-15 19:33:56.414432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:30.263 [2024-05-15 19:33:56.414437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:30.263 [2024-05-15 19:33:56.414977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:30.263 [2024-05-15 19:33:56.414986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:30.263 [2024-05-15 19:33:56.414995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:30.263 [2024-05-15 19:33:56.415000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:30.525 passed 00:18:30.525 Test: blockdev nvme passthru rw ...passed 00:18:30.525 Test: blockdev nvme passthru vendor specific ...[2024-05-15 19:33:56.500012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:30.525 [2024-05-15 19:33:56.500024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:30.525 [2024-05-15 19:33:56.500417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:30.525 [2024-05-15 19:33:56.500426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:30.525 [2024-05-15 19:33:56.500824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:30.525 [2024-05-15 19:33:56.500832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:30.525 [2024-05-15 19:33:56.501238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:30.525 [2024-05-15 19:33:56.501246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:30.525 passed 00:18:30.525 Test: blockdev nvme admin passthru ...passed 00:18:30.525 Test: blockdev copy ...passed 00:18:30.525 00:18:30.525 Run Summary: Type Total Ran Passed Failed Inactive 00:18:30.525 suites 1 1 n/a 0 0 00:18:30.525 tests 23 23 23 0 0 00:18:30.525 asserts 152 152 152 0 n/a 00:18:30.525 00:18:30.525 Elapsed time = 1.358 seconds 00:18:30.525 19:33:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:30.525 19:33:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.525 19:33:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:30.525 19:33:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.525 19:33:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:30.525 19:33:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:18:30.525 19:33:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:30.525 19:33:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:18:30.525 19:33:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:30.525 19:33:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:18:30.525 19:33:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:30.525 19:33:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:30.525 rmmod nvme_tcp 00:18:30.785 rmmod nvme_fabrics 00:18:30.785 rmmod nvme_keyring 00:18:30.785 19:33:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:30.785 19:33:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:18:30.785 19:33:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:18:30.785 19:33:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3588591 ']' 00:18:30.785 19:33:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3588591 00:18:30.785 19:33:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 
3588591 ']' 00:18:30.785 19:33:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 3588591 00:18:30.785 19:33:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:18:30.785 19:33:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:30.785 19:33:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3588591 00:18:30.785 19:33:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:18:30.785 19:33:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:18:30.785 19:33:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3588591' 00:18:30.785 killing process with pid 3588591 00:18:30.785 19:33:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 3588591 00:18:30.785 [2024-05-15 19:33:56.821883] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:30.785 19:33:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 3588591 00:18:31.046 19:33:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:31.046 19:33:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:31.046 19:33:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:31.046 19:33:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:31.046 19:33:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:31.046 19:33:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.046 19:33:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:31.046 19:33:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:32.956 19:33:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:32.956 00:18:32.956 real 0m12.983s 00:18:32.956 user 0m13.650s 00:18:32.956 sys 0m6.798s 00:18:32.956 19:33:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:32.956 19:33:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:32.956 ************************************ 00:18:32.956 END TEST nvmf_bdevio 00:18:32.956 ************************************ 00:18:33.217 19:33:59 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:33.217 19:33:59 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:33.217 19:33:59 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:33.217 19:33:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:33.217 ************************************ 00:18:33.217 START TEST nvmf_auth_target 00:18:33.217 ************************************ 00:18:33.217 19:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:33.217 * Looking for test storage... 
00:18:33.217 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:33.217 19:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:33.217 19:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:33.217 19:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:33.217 19:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:33.217 19:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:33.217 19:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:33.217 19:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:33.217 19:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:33.217 19:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:33.217 19:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:33.217 19:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:33.217 19:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:33.217 19:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:33.217 19:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:33.217 19:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:33.217 19:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:33.217 19:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:33.217 19:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:33.217 19:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:33.217 19:33:59 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:33.217 19:33:59 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:33.217 19:33:59 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:33.217 19:33:59 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.217 19:33:59 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.218 19:33:59 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.218 19:33:59 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:33.218 19:33:59 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.218 19:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:18:33.218 19:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:33.218 19:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:33.218 19:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:33.218 19:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:33.218 19:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:33.218 19:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:33.218 19:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:33.218 19:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:33.218 19:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:33.218 19:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:33.218 19:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:33.218 19:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:33.218 19:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:33.218 19:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:33.218 19:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@57 -- # nvmftestinit 00:18:33.218 19:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # 
'[' -z tcp ']' 00:18:33.218 19:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:33.218 19:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:33.218 19:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:33.218 19:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:33.218 19:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:33.218 19:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:33.218 19:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:33.218 19:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:33.218 19:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:33.218 19:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:33.218 19:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:41.377 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:41.377 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:41.377 Found net devices under 
0000:31:00.0: cvl_0_0 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:41.377 Found net devices under 0000:31:00.1: cvl_0_1 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:41.377 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:41.639 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:41.639 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:41.640 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:41.640 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:41.640 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:18:41.640 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:41.640 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:41.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:41.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.660 ms 00:18:41.640 00:18:41.640 --- 10.0.0.2 ping statistics --- 00:18:41.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.640 rtt min/avg/max/mdev = 0.660/0.660/0.660/0.000 ms 00:18:41.640 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:41.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:41.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.457 ms 00:18:41.640 00:18:41.640 --- 10.0.0.1 ping statistics --- 00:18:41.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.640 rtt min/avg/max/mdev = 0.457/0.457/0.457/0.000 ms 00:18:41.640 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:41.640 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:18:41.640 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:41.640 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:41.640 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:41.640 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:41.640 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:41.640 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:41.640 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:41.901 19:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@58 -- # nvmfappstart -L nvmf_auth 00:18:41.901 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:41.901 19:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:41.901 19:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.901 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3593954 00:18:41.901 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3593954 00:18:41.901 19:34:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:41.901 19:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3593954 ']' 00:18:41.901 19:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.901 19:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:41.901 19:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
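For readers reproducing the target-side network setup traced above, a minimal sketch of the namespace plumbing it performs is shown here; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.1/10.0.0.2 addresses are the ones used in this run and will differ on other NICs (the steps mirror nvmf_tcp_init in nvmf/common.sh).

  NS=cvl_0_0_ns_spdk
  sudo ip -4 addr flush cvl_0_0 && sudo ip -4 addr flush cvl_0_1      # start from clean interfaces
  sudo ip netns add "$NS"
  sudo ip link set cvl_0_0 netns "$NS"                                # target port lives inside the namespace
  sudo ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
  sudo ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  sudo ip link set cvl_0_1 up
  sudo ip netns exec "$NS" ip link set cvl_0_0 up
  sudo ip netns exec "$NS" ip link set lo up
  sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP (port 4420) through
  ping -c 1 10.0.0.2 && sudo ip netns exec "$NS" ping -c 1 10.0.0.1   # verify both directions before starting nvmf_tgt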
00:18:41.901 19:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:41.901 19:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # hostpid=3594005 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # gen_dhchap_key null 48 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ea93ac2c9714e8c8d7aa6cd0c591207d760b362c04a3993b 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.9FN 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ea93ac2c9714e8c8d7aa6cd0c591207d760b362c04a3993b 0 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ea93ac2c9714e8c8d7aa6cd0c591207d760b362c04a3993b 0 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ea93ac2c9714e8c8d7aa6cd0c591207d760b362c04a3993b 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.9FN 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.9FN 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # keys[0]=/tmp/spdk.key-null.9FN 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # gen_dhchap_key sha256 32 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d05d38b4cb72861869133dac2e6f0e35 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.EMt 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d05d38b4cb72861869133dac2e6f0e35 1 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d05d38b4cb72861869133dac2e6f0e35 1 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d05d38b4cb72861869133dac2e6f0e35 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.EMt 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.EMt 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # keys[1]=/tmp/spdk.key-sha256.EMt 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # gen_dhchap_key sha384 48 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:42.845 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=eb2f8764983a4d29b66df46430b75b131255cbfbcee75971 00:18:42.846 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:42.846 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.TJe 00:18:42.846 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key eb2f8764983a4d29b66df46430b75b131255cbfbcee75971 2 00:18:42.846 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 eb2f8764983a4d29b66df46430b75b131255cbfbcee75971 2 00:18:42.846 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:42.846 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:42.846 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=eb2f8764983a4d29b66df46430b75b131255cbfbcee75971 00:18:42.846 
19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:42.846 19:34:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:42.846 19:34:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.TJe 00:18:43.107 19:34:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.TJe 00:18:43.107 19:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # keys[2]=/tmp/spdk.key-sha384.TJe 00:18:43.107 19:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:18:43.107 19:34:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:43.108 19:34:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:43.108 19:34:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:43.108 19:34:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:43.108 19:34:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:43.108 19:34:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:43.108 19:34:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a242af3b131756166cb92c1a4702b3e029cd97c41305460bb6d6b66f6504f22f 00:18:43.108 19:34:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:43.108 19:34:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.EPJ 00:18:43.108 19:34:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a242af3b131756166cb92c1a4702b3e029cd97c41305460bb6d6b66f6504f22f 3 00:18:43.108 19:34:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a242af3b131756166cb92c1a4702b3e029cd97c41305460bb6d6b66f6504f22f 3 00:18:43.108 19:34:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:43.108 19:34:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:43.108 19:34:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a242af3b131756166cb92c1a4702b3e029cd97c41305460bb6d6b66f6504f22f 00:18:43.108 19:34:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:43.108 19:34:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:43.108 19:34:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.EPJ 00:18:43.108 19:34:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.EPJ 00:18:43.108 19:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[3]=/tmp/spdk.key-sha512.EPJ 00:18:43.108 19:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # waitforlisten 3593954 00:18:43.108 19:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3593954 ']' 00:18:43.108 19:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.108 19:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:43.108 19:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:43.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
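The /tmp/spdk.key-* files registered with the host keyring below are produced by gen_dhchap_key in the trace above; a condensed sketch of one such invocation (the null-digest, 48-hex-character case) follows. The DHHC-1 wrapping that the script applies with its inline python helper is not reproduced here, only noted in a comment.

  key=$(xxd -p -c0 -l 24 /dev/urandom)    # 24 random bytes -> 48 hex characters
  file=$(mktemp -t spdk.key-null.XXX)     # e.g. /tmp/spdk.key-null.9FN in this run
  # nvmf/common.sh then formats $key into its DHHC-1 representation (inline python helper,
  # omitted here), which is what ends up stored in $file, before tightening permissions:
  chmod 0600 "$file"
  echo "$file"                            # the path auth.sh records as keys[0]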
00:18:43.108 19:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:43.108 19:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.368 19:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:43.368 19:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:18:43.368 19:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # waitforlisten 3594005 /var/tmp/host.sock 00:18:43.368 19:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3594005 ']' 00:18:43.368 19:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/host.sock 00:18:43.368 19:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:43.368 19:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:43.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:43.368 19:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:43.368 19:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.368 19:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:43.368 19:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:18:43.368 19:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@71 -- # rpc_cmd 00:18:43.368 19:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.368 19:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.368 19:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.368 19:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:18:43.368 19:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.9FN 00:18:43.368 19:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.368 19:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.629 19:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.629 19:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.9FN 00:18:43.629 19:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.9FN 00:18:43.629 19:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:18:43.629 19:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.EMt 00:18:43.629 19:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.629 19:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.629 19:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.629 19:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.EMt 00:18:43.629 19:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
keyring_file_add_key key1 /tmp/spdk.key-sha256.EMt 00:18:43.890 19:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:18:43.890 19:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.TJe 00:18:43.890 19:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.890 19:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.890 19:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.890 19:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.TJe 00:18:43.890 19:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.TJe 00:18:44.152 19:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:18:44.152 19:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.EPJ 00:18:44.152 19:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.152 19:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.152 19:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.152 19:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.EPJ 00:18:44.152 19:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.EPJ 00:18:44.412 19:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:18:44.413 19:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:44.413 19:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:44.413 19:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:44.413 19:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:44.413 19:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 0 00:18:44.413 19:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:44.413 19:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:44.413 19:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:44.413 19:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:44.413 19:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 00:18:44.413 19:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.413 19:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.413 19:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.413 19:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:44.413 19:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:44.673 00:18:44.934 19:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:44.934 19:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:44.935 19:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.935 19:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.935 19:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.935 19:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.935 19:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.935 19:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.935 19:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:44.935 { 00:18:44.935 "cntlid": 1, 00:18:44.935 "qid": 0, 00:18:44.935 "state": "enabled", 00:18:44.935 "listen_address": { 00:18:44.935 "trtype": "TCP", 00:18:44.935 "adrfam": "IPv4", 00:18:44.935 "traddr": "10.0.0.2", 00:18:44.935 "trsvcid": "4420" 00:18:44.935 }, 00:18:44.935 "peer_address": { 00:18:44.935 "trtype": "TCP", 00:18:44.935 "adrfam": "IPv4", 00:18:44.935 "traddr": "10.0.0.1", 00:18:44.935 "trsvcid": "45940" 00:18:44.935 }, 00:18:44.935 "auth": { 00:18:44.935 "state": "completed", 00:18:44.935 "digest": "sha256", 00:18:44.935 "dhgroup": "null" 00:18:44.935 } 00:18:44.935 } 00:18:44.935 ]' 00:18:44.935 19:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:45.195 19:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:45.195 19:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:45.195 19:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:45.195 19:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:45.195 19:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.195 19:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.195 19:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.457 19:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZWE5M2FjMmM5NzE0ZThjOGQ3YWE2Y2QwYzU5MTIwN2Q3NjBiMzYyYzA0YTM5OTNiBPoO1A==: 00:18:46.030 19:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:18:46.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.030 19:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:46.030 19:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.030 19:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.292 19:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.292 19:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:46.292 19:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:46.292 19:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:46.292 19:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 1 00:18:46.292 19:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:46.292 19:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:46.292 19:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:46.292 19:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:46.292 19:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:18:46.292 19:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.292 19:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.292 19:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.292 19:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:46.292 19:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:46.553 00:18:46.553 19:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:46.553 19:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:46.553 19:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.814 19:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.814 19:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.814 19:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.814 19:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.814 19:34:12 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.814 19:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:46.814 { 00:18:46.814 "cntlid": 3, 00:18:46.814 "qid": 0, 00:18:46.814 "state": "enabled", 00:18:46.814 "listen_address": { 00:18:46.814 "trtype": "TCP", 00:18:46.814 "adrfam": "IPv4", 00:18:46.814 "traddr": "10.0.0.2", 00:18:46.814 "trsvcid": "4420" 00:18:46.814 }, 00:18:46.814 "peer_address": { 00:18:46.814 "trtype": "TCP", 00:18:46.814 "adrfam": "IPv4", 00:18:46.814 "traddr": "10.0.0.1", 00:18:46.814 "trsvcid": "45958" 00:18:46.814 }, 00:18:46.814 "auth": { 00:18:46.814 "state": "completed", 00:18:46.814 "digest": "sha256", 00:18:46.814 "dhgroup": "null" 00:18:46.814 } 00:18:46.814 } 00:18:46.814 ]' 00:18:46.814 19:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:46.814 19:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:46.814 19:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:47.073 19:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:47.073 19:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:47.073 19:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.073 19:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.073 19:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.334 19:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZDA1ZDM4YjRjYjcyODYxODY5MTMzZGFjMmU2ZjBlMzWEOGg3: 00:18:47.907 19:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.907 19:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:47.907 19:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.907 19:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.907 19:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.907 19:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:47.907 19:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:47.907 19:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:48.168 19:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 2 00:18:48.168 19:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:48.168 19:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:48.168 19:34:14 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@36 -- # dhgroup=null 00:18:48.168 19:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:48.168 19:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 00:18:48.168 19:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.168 19:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.168 19:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.168 19:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:48.168 19:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:48.461 00:18:48.461 19:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:48.461 19:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:48.461 19:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.763 19:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.763 19:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.763 19:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.763 19:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.763 19:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.763 19:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:48.763 { 00:18:48.763 "cntlid": 5, 00:18:48.763 "qid": 0, 00:18:48.763 "state": "enabled", 00:18:48.763 "listen_address": { 00:18:48.763 "trtype": "TCP", 00:18:48.763 "adrfam": "IPv4", 00:18:48.763 "traddr": "10.0.0.2", 00:18:48.763 "trsvcid": "4420" 00:18:48.763 }, 00:18:48.763 "peer_address": { 00:18:48.763 "trtype": "TCP", 00:18:48.763 "adrfam": "IPv4", 00:18:48.763 "traddr": "10.0.0.1", 00:18:48.763 "trsvcid": "45992" 00:18:48.763 }, 00:18:48.763 "auth": { 00:18:48.763 "state": "completed", 00:18:48.763 "digest": "sha256", 00:18:48.763 "dhgroup": "null" 00:18:48.763 } 00:18:48.763 } 00:18:48.763 ]' 00:18:48.763 19:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:48.763 19:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:48.763 19:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:48.763 19:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:48.763 19:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:48.763 19:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.763 19:34:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.763 19:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.030 19:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZWIyZjg3NjQ5ODNhNGQyOWI2NmRmNDY0MzBiNzViMTMxMjU1Y2JmYmNlZTc1OTcxvm1qIQ==: 00:18:49.968 19:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.968 19:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:49.968 19:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.968 19:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.968 19:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.968 19:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:49.968 19:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:49.969 19:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:49.969 19:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 3 00:18:49.969 19:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:49.969 19:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:49.969 19:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:49.969 19:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:49.969 19:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:49.969 19:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.969 19:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.969 19:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.969 19:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:49.969 19:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:50.355 00:18:50.355 19:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:50.355 19:34:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.355 19:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:50.355 19:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.355 19:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.355 19:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.355 19:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.614 19:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.614 19:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:50.614 { 00:18:50.614 "cntlid": 7, 00:18:50.614 "qid": 0, 00:18:50.614 "state": "enabled", 00:18:50.614 "listen_address": { 00:18:50.614 "trtype": "TCP", 00:18:50.614 "adrfam": "IPv4", 00:18:50.614 "traddr": "10.0.0.2", 00:18:50.614 "trsvcid": "4420" 00:18:50.614 }, 00:18:50.614 "peer_address": { 00:18:50.614 "trtype": "TCP", 00:18:50.614 "adrfam": "IPv4", 00:18:50.614 "traddr": "10.0.0.1", 00:18:50.614 "trsvcid": "46026" 00:18:50.614 }, 00:18:50.614 "auth": { 00:18:50.614 "state": "completed", 00:18:50.614 "digest": "sha256", 00:18:50.614 "dhgroup": "null" 00:18:50.614 } 00:18:50.614 } 00:18:50.614 ]' 00:18:50.614 19:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:50.614 19:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:50.614 19:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:50.614 19:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:50.614 19:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:50.614 19:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.614 19:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.615 19:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.874 19:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:YTI0MmFmM2IxMzE3NTYxNjZjYjkyYzFhNDcwMmIzZTAyOWNkOTdjNDEzMDU0NjBiYjZkNmI2NmY2NTA0ZjIyZq7CFVo=: 00:18:51.442 19:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.703 19:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:51.703 19:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.703 19:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.703 19:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.703 19:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for 
dhgroup in "${dhgroups[@]}" 00:18:51.703 19:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:51.703 19:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:51.703 19:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:51.703 19:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 0 00:18:51.703 19:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:51.703 19:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:51.703 19:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:51.703 19:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:51.703 19:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 00:18:51.703 19:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.703 19:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.963 19:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.963 19:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:51.963 19:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:52.224 00:18:52.224 19:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:52.224 19:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:52.224 19:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.224 19:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.224 19:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.224 19:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.224 19:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.224 19:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.224 19:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:52.224 { 00:18:52.224 "cntlid": 9, 00:18:52.224 "qid": 0, 00:18:52.224 "state": "enabled", 00:18:52.224 "listen_address": { 00:18:52.224 "trtype": "TCP", 00:18:52.224 "adrfam": "IPv4", 00:18:52.224 "traddr": "10.0.0.2", 00:18:52.224 "trsvcid": "4420" 00:18:52.224 }, 00:18:52.224 "peer_address": { 00:18:52.224 "trtype": "TCP", 00:18:52.224 "adrfam": "IPv4", 00:18:52.224 "traddr": "10.0.0.1", 
00:18:52.224 "trsvcid": "46040" 00:18:52.224 }, 00:18:52.224 "auth": { 00:18:52.224 "state": "completed", 00:18:52.224 "digest": "sha256", 00:18:52.224 "dhgroup": "ffdhe2048" 00:18:52.224 } 00:18:52.224 } 00:18:52.224 ]' 00:18:52.224 19:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:52.485 19:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:52.485 19:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:52.485 19:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:52.485 19:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:52.485 19:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.485 19:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.485 19:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.746 19:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZWE5M2FjMmM5NzE0ZThjOGQ3YWE2Y2QwYzU5MTIwN2Q3NjBiMzYyYzA0YTM5OTNiBPoO1A==: 00:18:53.315 19:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.315 19:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:53.315 19:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.315 19:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.316 19:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.316 19:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:53.316 19:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:53.316 19:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:53.576 19:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 1 00:18:53.576 19:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:53.576 19:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:53.576 19:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:53.576 19:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:53.576 19:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:18:53.576 19:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.576 19:34:19 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:53.576 19:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.576 19:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:53.576 19:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:53.838 00:18:53.838 19:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:53.838 19:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:53.838 19:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.099 19:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.099 19:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.099 19:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.099 19:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.099 19:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.099 19:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:54.099 { 00:18:54.099 "cntlid": 11, 00:18:54.099 "qid": 0, 00:18:54.099 "state": "enabled", 00:18:54.099 "listen_address": { 00:18:54.099 "trtype": "TCP", 00:18:54.099 "adrfam": "IPv4", 00:18:54.099 "traddr": "10.0.0.2", 00:18:54.099 "trsvcid": "4420" 00:18:54.099 }, 00:18:54.099 "peer_address": { 00:18:54.099 "trtype": "TCP", 00:18:54.099 "adrfam": "IPv4", 00:18:54.099 "traddr": "10.0.0.1", 00:18:54.099 "trsvcid": "37972" 00:18:54.099 }, 00:18:54.099 "auth": { 00:18:54.099 "state": "completed", 00:18:54.099 "digest": "sha256", 00:18:54.099 "dhgroup": "ffdhe2048" 00:18:54.099 } 00:18:54.099 } 00:18:54.099 ]' 00:18:54.099 19:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:54.099 19:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:54.099 19:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:54.359 19:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:54.359 19:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:54.359 19:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.359 19:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.359 19:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.630 19:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZDA1ZDM4YjRjYjcyODYxODY5MTMzZGFjMmU2ZjBlMzWEOGg3: 00:18:55.212 19:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.212 19:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:55.212 19:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.212 19:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.212 19:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.212 19:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:55.212 19:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:55.212 19:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:55.472 19:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 2 00:18:55.472 19:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:55.472 19:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:55.472 19:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:55.472 19:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:55.472 19:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 00:18:55.472 19:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.472 19:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.472 19:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.472 19:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:55.472 19:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:55.733 00:18:55.733 19:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:55.733 19:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:55.733 19:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.994 19:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.994 19:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:18:55.994 19:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.994 19:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.994 19:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.994 19:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:55.994 { 00:18:55.994 "cntlid": 13, 00:18:55.994 "qid": 0, 00:18:55.994 "state": "enabled", 00:18:55.994 "listen_address": { 00:18:55.994 "trtype": "TCP", 00:18:55.994 "adrfam": "IPv4", 00:18:55.994 "traddr": "10.0.0.2", 00:18:55.994 "trsvcid": "4420" 00:18:55.994 }, 00:18:55.994 "peer_address": { 00:18:55.994 "trtype": "TCP", 00:18:55.994 "adrfam": "IPv4", 00:18:55.994 "traddr": "10.0.0.1", 00:18:55.994 "trsvcid": "37990" 00:18:55.994 }, 00:18:55.994 "auth": { 00:18:55.994 "state": "completed", 00:18:55.994 "digest": "sha256", 00:18:55.994 "dhgroup": "ffdhe2048" 00:18:55.994 } 00:18:55.994 } 00:18:55.994 ]' 00:18:55.994 19:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:55.994 19:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:55.994 19:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:55.994 19:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:55.994 19:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:55.994 19:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.994 19:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.994 19:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.254 19:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZWIyZjg3NjQ5ODNhNGQyOWI2NmRmNDY0MzBiNzViMTMxMjU1Y2JmYmNlZTc1OTcxvm1qIQ==: 00:18:57.196 19:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.196 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.196 19:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:57.196 19:34:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.196 19:34:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.196 19:34:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.196 19:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:57.196 19:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:57.196 19:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:57.196 19:34:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 3 00:18:57.196 19:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:57.196 19:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:57.196 19:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:57.196 19:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:57.196 19:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:57.196 19:34:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.196 19:34:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.196 19:34:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.196 19:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:57.196 19:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:57.456 00:18:57.456 19:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:57.456 19:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:57.456 19:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.717 19:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.717 19:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.717 19:34:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.717 19:34:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.717 19:34:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.717 19:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:57.717 { 00:18:57.717 "cntlid": 15, 00:18:57.717 "qid": 0, 00:18:57.717 "state": "enabled", 00:18:57.717 "listen_address": { 00:18:57.717 "trtype": "TCP", 00:18:57.717 "adrfam": "IPv4", 00:18:57.717 "traddr": "10.0.0.2", 00:18:57.717 "trsvcid": "4420" 00:18:57.717 }, 00:18:57.717 "peer_address": { 00:18:57.717 "trtype": "TCP", 00:18:57.717 "adrfam": "IPv4", 00:18:57.717 "traddr": "10.0.0.1", 00:18:57.717 "trsvcid": "38016" 00:18:57.717 }, 00:18:57.717 "auth": { 00:18:57.717 "state": "completed", 00:18:57.717 "digest": "sha256", 00:18:57.717 "dhgroup": "ffdhe2048" 00:18:57.717 } 00:18:57.717 } 00:18:57.717 ]' 00:18:57.717 19:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:57.717 19:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:57.717 19:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:57.977 19:34:23 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:57.977 19:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:57.977 19:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.977 19:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.977 19:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.238 19:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:YTI0MmFmM2IxMzE3NTYxNjZjYjkyYzFhNDcwMmIzZTAyOWNkOTdjNDEzMDU0NjBiYjZkNmI2NmY2NTA0ZjIyZq7CFVo=: 00:18:58.810 19:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.810 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.810 19:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:58.810 19:34:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.810 19:34:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.810 19:34:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.810 19:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:58.810 19:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:58.810 19:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:58.810 19:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:59.070 19:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 0 00:18:59.070 19:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:59.070 19:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:59.070 19:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:59.070 19:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:59.070 19:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 00:18:59.070 19:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.070 19:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.070 19:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.070 19:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:59.070 19:34:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:59.330 00:18:59.330 19:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:59.330 19:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:59.330 19:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.591 19:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.591 19:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.591 19:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.591 19:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.591 19:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.591 19:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:59.591 { 00:18:59.591 "cntlid": 17, 00:18:59.591 "qid": 0, 00:18:59.591 "state": "enabled", 00:18:59.591 "listen_address": { 00:18:59.591 "trtype": "TCP", 00:18:59.591 "adrfam": "IPv4", 00:18:59.591 "traddr": "10.0.0.2", 00:18:59.591 "trsvcid": "4420" 00:18:59.591 }, 00:18:59.591 "peer_address": { 00:18:59.591 "trtype": "TCP", 00:18:59.591 "adrfam": "IPv4", 00:18:59.591 "traddr": "10.0.0.1", 00:18:59.591 "trsvcid": "38054" 00:18:59.591 }, 00:18:59.591 "auth": { 00:18:59.591 "state": "completed", 00:18:59.591 "digest": "sha256", 00:18:59.591 "dhgroup": "ffdhe3072" 00:18:59.591 } 00:18:59.591 } 00:18:59.591 ]' 00:18:59.591 19:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:59.591 19:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:59.591 19:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:59.591 19:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:59.591 19:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:59.851 19:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.851 19:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.851 19:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.851 19:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZWE5M2FjMmM5NzE0ZThjOGQ3YWE2Y2QwYzU5MTIwN2Q3NjBiMzYyYzA0YTM5OTNiBPoO1A==: 00:19:00.793 19:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.793 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.793 19:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:00.793 19:34:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.793 19:34:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.793 19:34:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.793 19:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:00.793 19:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:00.793 19:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:01.054 19:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 1 00:19:01.054 19:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:01.054 19:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:01.054 19:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:01.054 19:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:01.054 19:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:19:01.054 19:34:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.054 19:34:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.054 19:34:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.054 19:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:01.054 19:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:01.313 00:19:01.313 19:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:01.313 19:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:01.313 19:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.573 19:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.573 19:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.573 19:34:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.573 19:34:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.573 19:34:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.573 19:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:01.573 { 
00:19:01.573 "cntlid": 19, 00:19:01.573 "qid": 0, 00:19:01.573 "state": "enabled", 00:19:01.573 "listen_address": { 00:19:01.573 "trtype": "TCP", 00:19:01.573 "adrfam": "IPv4", 00:19:01.573 "traddr": "10.0.0.2", 00:19:01.573 "trsvcid": "4420" 00:19:01.573 }, 00:19:01.573 "peer_address": { 00:19:01.573 "trtype": "TCP", 00:19:01.573 "adrfam": "IPv4", 00:19:01.573 "traddr": "10.0.0.1", 00:19:01.573 "trsvcid": "38092" 00:19:01.573 }, 00:19:01.573 "auth": { 00:19:01.573 "state": "completed", 00:19:01.573 "digest": "sha256", 00:19:01.573 "dhgroup": "ffdhe3072" 00:19:01.573 } 00:19:01.573 } 00:19:01.573 ]' 00:19:01.573 19:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:01.573 19:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:01.573 19:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:01.573 19:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:01.573 19:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:01.573 19:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.573 19:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.573 19:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.833 19:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZDA1ZDM4YjRjYjcyODYxODY5MTMzZGFjMmU2ZjBlMzWEOGg3: 00:19:02.773 19:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.773 19:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:02.773 19:34:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.773 19:34:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.773 19:34:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.773 19:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:02.773 19:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:02.773 19:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:02.773 19:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 2 00:19:02.773 19:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:02.773 19:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:02.773 19:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:02.773 19:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:02.773 
19:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 00:19:02.773 19:34:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.773 19:34:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.773 19:34:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.773 19:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:02.773 19:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:03.034 00:19:03.034 19:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:03.034 19:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:03.034 19:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.295 19:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.295 19:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.295 19:34:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.295 19:34:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.295 19:34:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.295 19:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:03.295 { 00:19:03.295 "cntlid": 21, 00:19:03.295 "qid": 0, 00:19:03.295 "state": "enabled", 00:19:03.295 "listen_address": { 00:19:03.295 "trtype": "TCP", 00:19:03.295 "adrfam": "IPv4", 00:19:03.295 "traddr": "10.0.0.2", 00:19:03.295 "trsvcid": "4420" 00:19:03.295 }, 00:19:03.295 "peer_address": { 00:19:03.295 "trtype": "TCP", 00:19:03.295 "adrfam": "IPv4", 00:19:03.295 "traddr": "10.0.0.1", 00:19:03.295 "trsvcid": "38114" 00:19:03.295 }, 00:19:03.295 "auth": { 00:19:03.295 "state": "completed", 00:19:03.295 "digest": "sha256", 00:19:03.295 "dhgroup": "ffdhe3072" 00:19:03.295 } 00:19:03.295 } 00:19:03.295 ]' 00:19:03.295 19:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:03.295 19:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:03.295 19:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:03.295 19:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:03.295 19:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:03.556 19:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.556 19:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.556 19:34:29 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.556 19:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZWIyZjg3NjQ5ODNhNGQyOWI2NmRmNDY0MzBiNzViMTMxMjU1Y2JmYmNlZTc1OTcxvm1qIQ==: 00:19:04.496 19:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.496 19:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:04.496 19:34:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.496 19:34:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.496 19:34:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.496 19:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:04.496 19:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:04.496 19:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:04.496 19:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 3 00:19:04.496 19:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:04.496 19:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:04.496 19:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:04.496 19:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:04.496 19:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:04.496 19:34:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.496 19:34:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.496 19:34:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.496 19:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:04.496 19:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:05.066 00:19:05.066 19:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:05.066 19:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.066 19:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:05.066 19:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.066 19:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.066 19:34:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.066 19:34:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.066 19:34:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.066 19:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:05.066 { 00:19:05.066 "cntlid": 23, 00:19:05.066 "qid": 0, 00:19:05.066 "state": "enabled", 00:19:05.066 "listen_address": { 00:19:05.066 "trtype": "TCP", 00:19:05.066 "adrfam": "IPv4", 00:19:05.066 "traddr": "10.0.0.2", 00:19:05.066 "trsvcid": "4420" 00:19:05.066 }, 00:19:05.066 "peer_address": { 00:19:05.066 "trtype": "TCP", 00:19:05.066 "adrfam": "IPv4", 00:19:05.066 "traddr": "10.0.0.1", 00:19:05.066 "trsvcid": "36390" 00:19:05.066 }, 00:19:05.066 "auth": { 00:19:05.066 "state": "completed", 00:19:05.067 "digest": "sha256", 00:19:05.067 "dhgroup": "ffdhe3072" 00:19:05.067 } 00:19:05.067 } 00:19:05.067 ]' 00:19:05.067 19:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:05.067 19:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:05.067 19:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:05.327 19:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:05.327 19:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:05.327 19:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.327 19:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.327 19:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.587 19:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:YTI0MmFmM2IxMzE3NTYxNjZjYjkyYzFhNDcwMmIzZTAyOWNkOTdjNDEzMDU0NjBiYjZkNmI2NmY2NTA0ZjIyZq7CFVo=: 00:19:06.157 19:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.157 19:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:06.157 19:34:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.157 19:34:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.157 19:34:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.157 19:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 
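Each pass of this trace repeats the same connect/authenticate round trip from target/auth.sh for one (digest, dhgroup, key) combination. A condensed sketch of a single iteration (the sha256 / ffdhe4096 / key0 pass that follows), reconstructed only from commands visible in this log: rpc_cmd is the harness wrapper around scripts/rpc.py for the target application, the host-side socket, addresses and NQNs are the ones printed in the trace, and the DHHC-1 secret is abbreviated rather than copied in full.

  HOSTRPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
  HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396"
  SUBNQN="nqn.2024-03.io.spdk:cnode0"

  # host bdev_nvme layer: offer exactly one digest/dhgroup pair for this attempt
  $HOSTRPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
  # target: allow the host on the subsystem with the key under test
  rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0
  # host: attaching a controller triggers DH-HMAC-CHAP authentication
  $HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0
  # verify the admin queue pair reports the negotiated parameters
  rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'   # expect "completed"
  # tear down the user-space path, then repeat the check through the kernel host stack
  $HOSTRPC bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
      --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret "DHHC-1:00:..."   # key0 secret as printed in the trace
  nvme disconnect -n "$SUBNQN"
  rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

The jq checks against nvmf_subsystem_get_qpairs (auth.digest, auth.dhgroup, auth.state in the trace) are what actually assert that DH-HMAC-CHAP completed with the expected parameters before the kernel-host reconnect is attempted.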
00:19:06.157 19:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:06.157 19:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:06.157 19:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:06.417 19:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 0 00:19:06.417 19:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:06.417 19:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:06.417 19:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:06.417 19:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:06.417 19:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 00:19:06.417 19:34:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.417 19:34:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.417 19:34:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.417 19:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:06.417 19:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:06.677 00:19:06.677 19:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:06.677 19:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.677 19:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:06.936 19:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.936 19:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.936 19:34:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.936 19:34:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.936 19:34:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.936 19:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:06.936 { 00:19:06.936 "cntlid": 25, 00:19:06.936 "qid": 0, 00:19:06.936 "state": "enabled", 00:19:06.936 "listen_address": { 00:19:06.936 "trtype": "TCP", 00:19:06.936 "adrfam": "IPv4", 00:19:06.936 "traddr": "10.0.0.2", 00:19:06.936 "trsvcid": "4420" 00:19:06.936 }, 00:19:06.936 "peer_address": { 00:19:06.936 "trtype": "TCP", 00:19:06.936 "adrfam": "IPv4", 00:19:06.936 "traddr": "10.0.0.1", 00:19:06.936 "trsvcid": 
"36422" 00:19:06.936 }, 00:19:06.936 "auth": { 00:19:06.936 "state": "completed", 00:19:06.936 "digest": "sha256", 00:19:06.936 "dhgroup": "ffdhe4096" 00:19:06.936 } 00:19:06.936 } 00:19:06.936 ]' 00:19:06.936 19:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:07.195 19:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:07.195 19:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:07.195 19:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:07.195 19:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:07.195 19:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.195 19:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.195 19:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.454 19:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZWE5M2FjMmM5NzE0ZThjOGQ3YWE2Y2QwYzU5MTIwN2Q3NjBiMzYyYzA0YTM5OTNiBPoO1A==: 00:19:08.024 19:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.024 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.024 19:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:08.024 19:34:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.024 19:34:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.024 19:34:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.024 19:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:08.024 19:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:08.024 19:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:08.285 19:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 1 00:19:08.285 19:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:08.285 19:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:08.285 19:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:08.285 19:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:08.285 19:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:19:08.285 19:34:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.285 19:34:34 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:08.285 19:34:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.285 19:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:08.285 19:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:08.547 00:19:08.547 19:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:08.547 19:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:08.547 19:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.841 19:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.841 19:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.841 19:34:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.841 19:34:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.841 19:34:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.841 19:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:08.841 { 00:19:08.841 "cntlid": 27, 00:19:08.841 "qid": 0, 00:19:08.841 "state": "enabled", 00:19:08.841 "listen_address": { 00:19:08.841 "trtype": "TCP", 00:19:08.841 "adrfam": "IPv4", 00:19:08.841 "traddr": "10.0.0.2", 00:19:08.841 "trsvcid": "4420" 00:19:08.841 }, 00:19:08.841 "peer_address": { 00:19:08.841 "trtype": "TCP", 00:19:08.841 "adrfam": "IPv4", 00:19:08.841 "traddr": "10.0.0.1", 00:19:08.841 "trsvcid": "36440" 00:19:08.841 }, 00:19:08.841 "auth": { 00:19:08.841 "state": "completed", 00:19:08.841 "digest": "sha256", 00:19:08.841 "dhgroup": "ffdhe4096" 00:19:08.841 } 00:19:08.841 } 00:19:08.841 ]' 00:19:08.841 19:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:08.841 19:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:08.841 19:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:08.841 19:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:08.841 19:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:08.841 19:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.841 19:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.842 19:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.102 19:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZDA1ZDM4YjRjYjcyODYxODY5MTMzZGFjMmU2ZjBlMzWEOGg3: 00:19:10.044 19:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.044 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.044 19:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:10.044 19:34:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.044 19:34:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.044 19:34:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.044 19:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:10.044 19:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:10.044 19:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:10.044 19:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 2 00:19:10.044 19:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:10.044 19:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:10.044 19:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:10.044 19:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:10.044 19:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 00:19:10.044 19:34:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.044 19:34:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.044 19:34:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.044 19:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:10.044 19:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:10.617 00:19:10.617 19:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:10.617 19:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.617 19:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:10.617 19:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.617 19:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:19:10.617 19:34:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.617 19:34:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.617 19:34:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.617 19:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:10.617 { 00:19:10.617 "cntlid": 29, 00:19:10.617 "qid": 0, 00:19:10.617 "state": "enabled", 00:19:10.617 "listen_address": { 00:19:10.617 "trtype": "TCP", 00:19:10.617 "adrfam": "IPv4", 00:19:10.617 "traddr": "10.0.0.2", 00:19:10.617 "trsvcid": "4420" 00:19:10.617 }, 00:19:10.617 "peer_address": { 00:19:10.617 "trtype": "TCP", 00:19:10.617 "adrfam": "IPv4", 00:19:10.617 "traddr": "10.0.0.1", 00:19:10.617 "trsvcid": "36454" 00:19:10.617 }, 00:19:10.617 "auth": { 00:19:10.617 "state": "completed", 00:19:10.617 "digest": "sha256", 00:19:10.617 "dhgroup": "ffdhe4096" 00:19:10.617 } 00:19:10.617 } 00:19:10.617 ]' 00:19:10.617 19:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:10.878 19:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:10.878 19:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:10.878 19:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:10.878 19:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:10.878 19:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.878 19:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.878 19:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.138 19:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZWIyZjg3NjQ5ODNhNGQyOWI2NmRmNDY0MzBiNzViMTMxMjU1Y2JmYmNlZTc1OTcxvm1qIQ==: 00:19:11.710 19:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.710 19:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:11.710 19:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.710 19:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.710 19:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.710 19:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:11.710 19:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:11.710 19:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:11.971 19:34:38 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 3 00:19:11.972 19:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:11.972 19:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:11.972 19:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:11.972 19:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:11.972 19:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:11.972 19:34:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.972 19:34:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.972 19:34:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.972 19:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:11.972 19:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:12.233 00:19:12.493 19:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:12.493 19:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:12.493 19:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.493 19:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.493 19:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.493 19:34:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.493 19:34:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.493 19:34:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.493 19:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:12.493 { 00:19:12.493 "cntlid": 31, 00:19:12.494 "qid": 0, 00:19:12.494 "state": "enabled", 00:19:12.494 "listen_address": { 00:19:12.494 "trtype": "TCP", 00:19:12.494 "adrfam": "IPv4", 00:19:12.494 "traddr": "10.0.0.2", 00:19:12.494 "trsvcid": "4420" 00:19:12.494 }, 00:19:12.494 "peer_address": { 00:19:12.494 "trtype": "TCP", 00:19:12.494 "adrfam": "IPv4", 00:19:12.494 "traddr": "10.0.0.1", 00:19:12.494 "trsvcid": "36460" 00:19:12.494 }, 00:19:12.494 "auth": { 00:19:12.494 "state": "completed", 00:19:12.494 "digest": "sha256", 00:19:12.494 "dhgroup": "ffdhe4096" 00:19:12.494 } 00:19:12.494 } 00:19:12.494 ]' 00:19:12.494 19:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:12.754 19:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:12.754 19:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:12.754 19:34:38 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:12.754 19:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:12.754 19:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.754 19:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.754 19:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.015 19:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:YTI0MmFmM2IxMzE3NTYxNjZjYjkyYzFhNDcwMmIzZTAyOWNkOTdjNDEzMDU0NjBiYjZkNmI2NmY2NTA0ZjIyZq7CFVo=: 00:19:13.586 19:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.586 19:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:13.586 19:34:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.586 19:34:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.586 19:34:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.586 19:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:13.586 19:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:13.586 19:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:13.586 19:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:13.846 19:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 0 00:19:13.846 19:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:13.846 19:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:13.846 19:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:13.846 19:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:13.846 19:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 00:19:13.846 19:34:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.846 19:34:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.846 19:34:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.846 19:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:13.846 19:34:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:14.419 00:19:14.419 19:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:14.419 19:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:14.419 19:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.419 19:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.419 19:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.419 19:34:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.419 19:34:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.419 19:34:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.419 19:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:14.419 { 00:19:14.419 "cntlid": 33, 00:19:14.419 "qid": 0, 00:19:14.419 "state": "enabled", 00:19:14.419 "listen_address": { 00:19:14.419 "trtype": "TCP", 00:19:14.419 "adrfam": "IPv4", 00:19:14.419 "traddr": "10.0.0.2", 00:19:14.419 "trsvcid": "4420" 00:19:14.419 }, 00:19:14.419 "peer_address": { 00:19:14.419 "trtype": "TCP", 00:19:14.419 "adrfam": "IPv4", 00:19:14.419 "traddr": "10.0.0.1", 00:19:14.419 "trsvcid": "49084" 00:19:14.419 }, 00:19:14.419 "auth": { 00:19:14.419 "state": "completed", 00:19:14.419 "digest": "sha256", 00:19:14.419 "dhgroup": "ffdhe6144" 00:19:14.419 } 00:19:14.419 } 00:19:14.419 ]' 00:19:14.680 19:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:14.680 19:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:14.680 19:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:14.680 19:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:14.680 19:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:14.680 19:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.680 19:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.680 19:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.940 19:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZWE5M2FjMmM5NzE0ZThjOGQ3YWE2Y2QwYzU5MTIwN2Q3NjBiMzYyYzA0YTM5OTNiBPoO1A==: 00:19:15.533 19:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.533 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.533 19:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:15.533 19:34:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.533 19:34:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.883 19:34:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.883 19:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:15.883 19:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:15.883 19:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:15.883 19:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 1 00:19:15.883 19:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:15.883 19:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:15.883 19:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:15.883 19:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:15.883 19:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:19:15.883 19:34:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.883 19:34:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.883 19:34:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.883 19:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:15.883 19:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:16.144 00:19:16.404 19:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:16.404 19:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:16.404 19:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.404 19:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.404 19:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.404 19:34:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.404 19:34:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.404 19:34:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.404 19:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:16.404 { 
00:19:16.404 "cntlid": 35, 00:19:16.404 "qid": 0, 00:19:16.404 "state": "enabled", 00:19:16.404 "listen_address": { 00:19:16.404 "trtype": "TCP", 00:19:16.404 "adrfam": "IPv4", 00:19:16.404 "traddr": "10.0.0.2", 00:19:16.404 "trsvcid": "4420" 00:19:16.404 }, 00:19:16.404 "peer_address": { 00:19:16.404 "trtype": "TCP", 00:19:16.404 "adrfam": "IPv4", 00:19:16.404 "traddr": "10.0.0.1", 00:19:16.404 "trsvcid": "49100" 00:19:16.404 }, 00:19:16.404 "auth": { 00:19:16.404 "state": "completed", 00:19:16.404 "digest": "sha256", 00:19:16.404 "dhgroup": "ffdhe6144" 00:19:16.404 } 00:19:16.404 } 00:19:16.404 ]' 00:19:16.404 19:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:16.665 19:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:16.665 19:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:16.665 19:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:16.665 19:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:16.665 19:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.665 19:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.665 19:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.925 19:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZDA1ZDM4YjRjYjcyODYxODY5MTMzZGFjMmU2ZjBlMzWEOGg3: 00:19:17.496 19:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.496 19:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:17.496 19:34:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.496 19:34:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.496 19:34:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.496 19:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:17.496 19:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:17.496 19:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:17.756 19:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 2 00:19:17.756 19:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:17.756 19:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:17.756 19:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:17.756 19:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:17.756 
19:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 00:19:17.756 19:34:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.756 19:34:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.756 19:34:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.756 19:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:17.756 19:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:18.328 00:19:18.328 19:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:18.328 19:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.328 19:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:18.588 19:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.588 19:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.588 19:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.588 19:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.588 19:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.588 19:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:18.588 { 00:19:18.588 "cntlid": 37, 00:19:18.588 "qid": 0, 00:19:18.588 "state": "enabled", 00:19:18.588 "listen_address": { 00:19:18.588 "trtype": "TCP", 00:19:18.588 "adrfam": "IPv4", 00:19:18.588 "traddr": "10.0.0.2", 00:19:18.588 "trsvcid": "4420" 00:19:18.588 }, 00:19:18.588 "peer_address": { 00:19:18.588 "trtype": "TCP", 00:19:18.588 "adrfam": "IPv4", 00:19:18.588 "traddr": "10.0.0.1", 00:19:18.588 "trsvcid": "49136" 00:19:18.588 }, 00:19:18.588 "auth": { 00:19:18.588 "state": "completed", 00:19:18.588 "digest": "sha256", 00:19:18.588 "dhgroup": "ffdhe6144" 00:19:18.588 } 00:19:18.588 } 00:19:18.588 ]' 00:19:18.588 19:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:18.588 19:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:18.588 19:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:18.588 19:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:18.589 19:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:18.589 19:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.589 19:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.589 19:34:44 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.849 19:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZWIyZjg3NjQ5ODNhNGQyOWI2NmRmNDY0MzBiNzViMTMxMjU1Y2JmYmNlZTc1OTcxvm1qIQ==: 00:19:19.791 19:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.791 19:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:19.791 19:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.791 19:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.791 19:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.791 19:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:19.791 19:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:19.791 19:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:19.791 19:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 3 00:19:19.791 19:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:19.791 19:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:19.791 19:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:19.791 19:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:19.791 19:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:19.791 19:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.791 19:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.791 19:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.792 19:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:19.792 19:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:20.363 00:19:20.363 19:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:20.363 19:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.363 19:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:20.363 19:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.363 19:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.363 19:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.363 19:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.363 19:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.363 19:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:20.363 { 00:19:20.363 "cntlid": 39, 00:19:20.363 "qid": 0, 00:19:20.363 "state": "enabled", 00:19:20.363 "listen_address": { 00:19:20.363 "trtype": "TCP", 00:19:20.363 "adrfam": "IPv4", 00:19:20.363 "traddr": "10.0.0.2", 00:19:20.363 "trsvcid": "4420" 00:19:20.363 }, 00:19:20.363 "peer_address": { 00:19:20.363 "trtype": "TCP", 00:19:20.363 "adrfam": "IPv4", 00:19:20.363 "traddr": "10.0.0.1", 00:19:20.363 "trsvcid": "49168" 00:19:20.363 }, 00:19:20.363 "auth": { 00:19:20.363 "state": "completed", 00:19:20.363 "digest": "sha256", 00:19:20.363 "dhgroup": "ffdhe6144" 00:19:20.363 } 00:19:20.363 } 00:19:20.363 ]' 00:19:20.363 19:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:20.624 19:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:20.624 19:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:20.624 19:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:20.624 19:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:20.624 19:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.624 19:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.624 19:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.885 19:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:YTI0MmFmM2IxMzE3NTYxNjZjYjkyYzFhNDcwMmIzZTAyOWNkOTdjNDEzMDU0NjBiYjZkNmI2NmY2NTA0ZjIyZq7CFVo=: 00:19:21.456 19:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.456 19:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:21.456 19:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.456 19:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.456 19:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.456 19:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 
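The target/auth.sh@85 and @86 markers in the trace are the points where the sweep advances to the next DH group and key. A minimal sketch of that outer loop, inferred from the xtrace comments alone: the array contents below are just the values this part of the log exercises, and the key variables are placeholders for the DHHC-1 secrets printed above, not names taken from the script source.

  digest=sha256
  dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)   # groups covered in this part of the log
  keys=("$key0" "$key1" "$key2" "$key3")               # placeholders for the DHHC-1 secrets above

  for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
          # offer only one digest/dhgroup pair per attempt, then run the round trip sketched earlier
          hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
          connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
  done

Re-running bdev_nvme_set_options before every attempt keeps the host offering exactly one digest/dhgroup pair, so a successful attach demonstrates that specific combination negotiated rather than a fallback.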
00:19:21.456 19:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:21.456 19:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:21.456 19:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:21.717 19:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 0 00:19:21.717 19:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:21.717 19:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:21.717 19:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:21.717 19:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:21.717 19:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 00:19:21.717 19:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.717 19:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.717 19:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.717 19:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:21.717 19:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:22.289 00:19:22.289 19:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:22.289 19:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:22.289 19:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.550 19:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.550 19:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.550 19:34:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.550 19:34:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.550 19:34:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.550 19:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:22.550 { 00:19:22.550 "cntlid": 41, 00:19:22.550 "qid": 0, 00:19:22.550 "state": "enabled", 00:19:22.550 "listen_address": { 00:19:22.550 "trtype": "TCP", 00:19:22.550 "adrfam": "IPv4", 00:19:22.550 "traddr": "10.0.0.2", 00:19:22.550 "trsvcid": "4420" 00:19:22.550 }, 00:19:22.550 "peer_address": { 00:19:22.550 "trtype": "TCP", 00:19:22.550 "adrfam": "IPv4", 00:19:22.550 "traddr": "10.0.0.1", 00:19:22.550 "trsvcid": 
"49202" 00:19:22.550 }, 00:19:22.550 "auth": { 00:19:22.550 "state": "completed", 00:19:22.550 "digest": "sha256", 00:19:22.550 "dhgroup": "ffdhe8192" 00:19:22.550 } 00:19:22.550 } 00:19:22.550 ]' 00:19:22.550 19:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:22.811 19:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:22.811 19:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:22.811 19:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:22.811 19:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:22.811 19:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.811 19:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.811 19:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.071 19:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZWE5M2FjMmM5NzE0ZThjOGQ3YWE2Y2QwYzU5MTIwN2Q3NjBiMzYyYzA0YTM5OTNiBPoO1A==: 00:19:23.642 19:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.642 19:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:23.642 19:34:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.642 19:34:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.642 19:34:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.642 19:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:23.642 19:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:23.642 19:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:23.903 19:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 1 00:19:23.903 19:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:23.903 19:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:23.903 19:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:23.903 19:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:23.903 19:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:19:23.903 19:34:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.903 19:34:50 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:23.903 19:34:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.903 19:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:23.903 19:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:24.477 00:19:24.477 19:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:24.477 19:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:24.477 19:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.738 19:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.738 19:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.738 19:34:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.738 19:34:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.738 19:34:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.738 19:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:24.738 { 00:19:24.738 "cntlid": 43, 00:19:24.738 "qid": 0, 00:19:24.738 "state": "enabled", 00:19:24.738 "listen_address": { 00:19:24.738 "trtype": "TCP", 00:19:24.738 "adrfam": "IPv4", 00:19:24.738 "traddr": "10.0.0.2", 00:19:24.738 "trsvcid": "4420" 00:19:24.738 }, 00:19:24.738 "peer_address": { 00:19:24.738 "trtype": "TCP", 00:19:24.738 "adrfam": "IPv4", 00:19:24.738 "traddr": "10.0.0.1", 00:19:24.738 "trsvcid": "39656" 00:19:24.738 }, 00:19:24.738 "auth": { 00:19:24.738 "state": "completed", 00:19:24.738 "digest": "sha256", 00:19:24.738 "dhgroup": "ffdhe8192" 00:19:24.738 } 00:19:24.738 } 00:19:24.738 ]' 00:19:24.738 19:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:24.738 19:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:24.738 19:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:25.000 19:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:25.000 19:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:25.000 19:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.000 19:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.000 19:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.260 19:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZDA1ZDM4YjRjYjcyODYxODY5MTMzZGFjMmU2ZjBlMzWEOGg3: 00:19:25.831 19:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.831 19:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:25.831 19:34:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.831 19:34:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.831 19:34:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.831 19:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:25.831 19:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:25.831 19:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:26.092 19:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 2 00:19:26.092 19:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:26.092 19:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:26.092 19:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:26.092 19:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:26.092 19:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 00:19:26.092 19:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.092 19:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.092 19:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.092 19:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:26.092 19:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:26.664 00:19:26.664 19:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:26.664 19:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.664 19:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:26.930 19:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.930 19:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:19:26.930 19:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.930 19:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.930 19:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.930 19:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:26.930 { 00:19:26.930 "cntlid": 45, 00:19:26.930 "qid": 0, 00:19:26.930 "state": "enabled", 00:19:26.930 "listen_address": { 00:19:26.930 "trtype": "TCP", 00:19:26.930 "adrfam": "IPv4", 00:19:26.930 "traddr": "10.0.0.2", 00:19:26.930 "trsvcid": "4420" 00:19:26.930 }, 00:19:26.930 "peer_address": { 00:19:26.930 "trtype": "TCP", 00:19:26.930 "adrfam": "IPv4", 00:19:26.930 "traddr": "10.0.0.1", 00:19:26.930 "trsvcid": "39688" 00:19:26.930 }, 00:19:26.930 "auth": { 00:19:26.930 "state": "completed", 00:19:26.930 "digest": "sha256", 00:19:26.930 "dhgroup": "ffdhe8192" 00:19:26.930 } 00:19:26.930 } 00:19:26.930 ]' 00:19:26.930 19:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:26.930 19:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:26.930 19:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:27.192 19:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:27.192 19:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:27.192 19:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.192 19:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.192 19:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.453 19:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZWIyZjg3NjQ5ODNhNGQyOWI2NmRmNDY0MzBiNzViMTMxMjU1Y2JmYmNlZTc1OTcxvm1qIQ==: 00:19:28.025 19:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.025 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.025 19:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:28.025 19:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.025 19:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.025 19:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.025 19:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:28.025 19:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:28.025 19:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:28.286 19:34:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 3 00:19:28.286 19:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:28.286 19:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:28.286 19:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:28.286 19:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:28.286 19:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:28.286 19:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.286 19:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.286 19:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.286 19:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:28.286 19:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:28.856 00:19:28.856 19:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:28.856 19:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.856 19:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:29.117 19:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.117 19:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.117 19:34:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.117 19:34:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.117 19:34:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.117 19:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:29.117 { 00:19:29.117 "cntlid": 47, 00:19:29.117 "qid": 0, 00:19:29.117 "state": "enabled", 00:19:29.117 "listen_address": { 00:19:29.117 "trtype": "TCP", 00:19:29.117 "adrfam": "IPv4", 00:19:29.117 "traddr": "10.0.0.2", 00:19:29.117 "trsvcid": "4420" 00:19:29.117 }, 00:19:29.117 "peer_address": { 00:19:29.117 "trtype": "TCP", 00:19:29.117 "adrfam": "IPv4", 00:19:29.117 "traddr": "10.0.0.1", 00:19:29.117 "trsvcid": "39718" 00:19:29.117 }, 00:19:29.117 "auth": { 00:19:29.117 "state": "completed", 00:19:29.118 "digest": "sha256", 00:19:29.118 "dhgroup": "ffdhe8192" 00:19:29.118 } 00:19:29.118 } 00:19:29.118 ]' 00:19:29.118 19:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:29.118 19:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:29.118 19:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:29.378 19:34:55 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:29.378 19:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:29.378 19:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.378 19:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.378 19:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.638 19:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:YTI0MmFmM2IxMzE3NTYxNjZjYjkyYzFhNDcwMmIzZTAyOWNkOTdjNDEzMDU0NjBiYjZkNmI2NmY2NTA0ZjIyZq7CFVo=: 00:19:30.210 19:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.210 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.210 19:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:30.210 19:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.210 19:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.210 19:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.210 19:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:19:30.210 19:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:30.210 19:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:30.210 19:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:30.210 19:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:30.471 19:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 0 00:19:30.471 19:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:30.471 19:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:30.471 19:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:30.471 19:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:30.471 19:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 00:19:30.471 19:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.471 19:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.471 19:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.471 19:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:30.471 19:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:30.732 00:19:30.732 19:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:30.732 19:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:30.732 19:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.993 19:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.993 19:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.993 19:34:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.993 19:34:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.993 19:34:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.993 19:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:30.993 { 00:19:30.993 "cntlid": 49, 00:19:30.993 "qid": 0, 00:19:30.993 "state": "enabled", 00:19:30.993 "listen_address": { 00:19:30.993 "trtype": "TCP", 00:19:30.993 "adrfam": "IPv4", 00:19:30.993 "traddr": "10.0.0.2", 00:19:30.993 "trsvcid": "4420" 00:19:30.993 }, 00:19:30.993 "peer_address": { 00:19:30.993 "trtype": "TCP", 00:19:30.993 "adrfam": "IPv4", 00:19:30.993 "traddr": "10.0.0.1", 00:19:30.993 "trsvcid": "39752" 00:19:30.993 }, 00:19:30.993 "auth": { 00:19:30.993 "state": "completed", 00:19:30.993 "digest": "sha384", 00:19:30.993 "dhgroup": "null" 00:19:30.993 } 00:19:30.993 } 00:19:30.993 ]' 00:19:30.993 19:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:30.993 19:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:30.993 19:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:30.993 19:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:19:30.993 19:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:31.254 19:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.254 19:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.254 19:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.254 19:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZWE5M2FjMmM5NzE0ZThjOGQ3YWE2Y2QwYzU5MTIwN2Q3NjBiMzYyYzA0YTM5OTNiBPoO1A==: 00:19:32.197 19:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.197 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:19:32.197 19:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:32.197 19:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.197 19:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.197 19:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.197 19:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:32.197 19:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:32.197 19:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:32.197 19:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 1 00:19:32.197 19:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:32.197 19:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:32.197 19:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:32.197 19:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:32.197 19:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:19:32.197 19:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.197 19:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.197 19:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.197 19:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:32.197 19:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:32.458 00:19:32.720 19:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:32.720 19:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:32.720 19:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.720 19:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.720 19:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.720 19:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.720 19:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.720 19:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.720 19:34:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:32.720 { 00:19:32.720 "cntlid": 51, 00:19:32.720 "qid": 0, 00:19:32.720 "state": "enabled", 00:19:32.720 "listen_address": { 00:19:32.720 "trtype": "TCP", 00:19:32.720 "adrfam": "IPv4", 00:19:32.720 "traddr": "10.0.0.2", 00:19:32.720 "trsvcid": "4420" 00:19:32.720 }, 00:19:32.720 "peer_address": { 00:19:32.720 "trtype": "TCP", 00:19:32.720 "adrfam": "IPv4", 00:19:32.720 "traddr": "10.0.0.1", 00:19:32.720 "trsvcid": "39792" 00:19:32.720 }, 00:19:32.720 "auth": { 00:19:32.720 "state": "completed", 00:19:32.720 "digest": "sha384", 00:19:32.721 "dhgroup": "null" 00:19:32.721 } 00:19:32.721 } 00:19:32.721 ]' 00:19:32.721 19:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:32.982 19:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:32.982 19:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:32.982 19:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:19:32.982 19:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:32.982 19:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.982 19:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.982 19:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.243 19:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZDA1ZDM4YjRjYjcyODYxODY5MTMzZGFjMmU2ZjBlMzWEOGg3: 00:19:33.814 19:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.814 19:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:33.814 19:34:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.814 19:34:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.814 19:34:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.814 19:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:33.814 19:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:33.814 19:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:34.076 19:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 2 00:19:34.076 19:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:34.076 19:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:34.076 19:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:34.076 19:35:00 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key2 00:19:34.076 19:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 00:19:34.076 19:35:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.076 19:35:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.076 19:35:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.076 19:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:34.076 19:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:34.337 00:19:34.337 19:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:34.337 19:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:34.337 19:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.598 19:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.598 19:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.598 19:35:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.598 19:35:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.598 19:35:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.598 19:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:34.598 { 00:19:34.598 "cntlid": 53, 00:19:34.598 "qid": 0, 00:19:34.598 "state": "enabled", 00:19:34.598 "listen_address": { 00:19:34.598 "trtype": "TCP", 00:19:34.598 "adrfam": "IPv4", 00:19:34.598 "traddr": "10.0.0.2", 00:19:34.598 "trsvcid": "4420" 00:19:34.598 }, 00:19:34.598 "peer_address": { 00:19:34.598 "trtype": "TCP", 00:19:34.598 "adrfam": "IPv4", 00:19:34.598 "traddr": "10.0.0.1", 00:19:34.598 "trsvcid": "55704" 00:19:34.598 }, 00:19:34.598 "auth": { 00:19:34.598 "state": "completed", 00:19:34.598 "digest": "sha384", 00:19:34.598 "dhgroup": "null" 00:19:34.598 } 00:19:34.598 } 00:19:34.598 ]' 00:19:34.598 19:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:34.598 19:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:34.598 19:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:34.598 19:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:19:34.598 19:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:34.860 19:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.860 19:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.860 19:35:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.860 19:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZWIyZjg3NjQ5ODNhNGQyOWI2NmRmNDY0MzBiNzViMTMxMjU1Y2JmYmNlZTc1OTcxvm1qIQ==: 00:19:35.798 19:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.798 19:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:35.798 19:35:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.798 19:35:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.798 19:35:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.798 19:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:35.798 19:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:35.798 19:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:35.798 19:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 3 00:19:35.798 19:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:35.798 19:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:35.798 19:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:35.798 19:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:35.798 19:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:35.798 19:35:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.798 19:35:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.798 19:35:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.798 19:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:35.798 19:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:36.058 00:19:36.058 19:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:36.058 19:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.058 19:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:36.318 19:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.318 19:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.318 19:35:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.318 19:35:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.318 19:35:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.318 19:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:36.318 { 00:19:36.318 "cntlid": 55, 00:19:36.318 "qid": 0, 00:19:36.318 "state": "enabled", 00:19:36.318 "listen_address": { 00:19:36.318 "trtype": "TCP", 00:19:36.318 "adrfam": "IPv4", 00:19:36.318 "traddr": "10.0.0.2", 00:19:36.318 "trsvcid": "4420" 00:19:36.318 }, 00:19:36.318 "peer_address": { 00:19:36.318 "trtype": "TCP", 00:19:36.318 "adrfam": "IPv4", 00:19:36.318 "traddr": "10.0.0.1", 00:19:36.318 "trsvcid": "55744" 00:19:36.318 }, 00:19:36.318 "auth": { 00:19:36.318 "state": "completed", 00:19:36.318 "digest": "sha384", 00:19:36.318 "dhgroup": "null" 00:19:36.318 } 00:19:36.318 } 00:19:36.318 ]' 00:19:36.318 19:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:36.318 19:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:36.318 19:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:36.579 19:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:19:36.579 19:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:36.579 19:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.579 19:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.579 19:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.851 19:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:YTI0MmFmM2IxMzE3NTYxNjZjYjkyYzFhNDcwMmIzZTAyOWNkOTdjNDEzMDU0NjBiYjZkNmI2NmY2NTA0ZjIyZq7CFVo=: 00:19:37.427 19:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.427 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.427 19:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:37.427 19:35:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.427 19:35:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.427 19:35:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.427 19:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:37.427 19:35:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:37.427 19:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:37.427 19:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:37.688 19:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 0 00:19:37.688 19:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:37.688 19:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:37.688 19:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:37.688 19:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:37.688 19:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 00:19:37.688 19:35:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.688 19:35:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.688 19:35:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.688 19:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:37.688 19:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:37.949 00:19:37.949 19:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:37.949 19:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.949 19:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:38.210 19:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.210 19:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.210 19:35:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.210 19:35:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.210 19:35:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.210 19:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:38.210 { 00:19:38.210 "cntlid": 57, 00:19:38.210 "qid": 0, 00:19:38.210 "state": "enabled", 00:19:38.210 "listen_address": { 00:19:38.210 "trtype": "TCP", 00:19:38.210 "adrfam": "IPv4", 00:19:38.210 "traddr": "10.0.0.2", 00:19:38.210 "trsvcid": "4420" 00:19:38.210 }, 00:19:38.210 "peer_address": { 00:19:38.210 "trtype": "TCP", 00:19:38.210 "adrfam": "IPv4", 00:19:38.210 "traddr": "10.0.0.1", 00:19:38.210 "trsvcid": "55776" 00:19:38.210 }, 
00:19:38.210 "auth": { 00:19:38.210 "state": "completed", 00:19:38.210 "digest": "sha384", 00:19:38.210 "dhgroup": "ffdhe2048" 00:19:38.210 } 00:19:38.210 } 00:19:38.210 ]' 00:19:38.210 19:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:38.210 19:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:38.210 19:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:38.210 19:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:38.210 19:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:38.210 19:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.210 19:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.210 19:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.471 19:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZWE5M2FjMmM5NzE0ZThjOGQ3YWE2Y2QwYzU5MTIwN2Q3NjBiMzYyYzA0YTM5OTNiBPoO1A==: 00:19:39.413 19:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.413 19:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:39.413 19:35:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.413 19:35:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.413 19:35:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.413 19:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:39.413 19:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:39.413 19:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:39.413 19:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 1 00:19:39.413 19:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:39.413 19:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:39.413 19:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:39.413 19:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:39.413 19:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:19:39.413 19:35:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.413 19:35:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
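Each cycle closes with the same three assertions against the qpair listing printed above: the digest, DH group, and authentication state reported by nvmf_subsystem_get_qpairs must match what bdev_nvme_set_options restricted the host to. A rough standalone equivalent of that check, reusing the rpc.py path and subsystem NQN from this run and the sha384/ffdhe2048 combination in progress at this point in the trace:

  # Assert the negotiated auth parameters on the first qpair of the subsystem.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]   # digest the host was limited to
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]   # DH group the host was limited to
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # DH-HMAC-CHAP handshake finished
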
00:19:39.413 19:35:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.413 19:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:39.413 19:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:39.674 00:19:39.674 19:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:39.674 19:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.674 19:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:39.935 19:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.935 19:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.935 19:35:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.935 19:35:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.935 19:35:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.935 19:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:39.935 { 00:19:39.935 "cntlid": 59, 00:19:39.935 "qid": 0, 00:19:39.935 "state": "enabled", 00:19:39.935 "listen_address": { 00:19:39.935 "trtype": "TCP", 00:19:39.935 "adrfam": "IPv4", 00:19:39.935 "traddr": "10.0.0.2", 00:19:39.935 "trsvcid": "4420" 00:19:39.935 }, 00:19:39.935 "peer_address": { 00:19:39.935 "trtype": "TCP", 00:19:39.935 "adrfam": "IPv4", 00:19:39.935 "traddr": "10.0.0.1", 00:19:39.935 "trsvcid": "55808" 00:19:39.935 }, 00:19:39.935 "auth": { 00:19:39.935 "state": "completed", 00:19:39.935 "digest": "sha384", 00:19:39.935 "dhgroup": "ffdhe2048" 00:19:39.935 } 00:19:39.935 } 00:19:39.935 ]' 00:19:39.935 19:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:39.935 19:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:39.935 19:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:40.195 19:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:40.195 19:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:40.195 19:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.195 19:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.195 19:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.195 19:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 
--dhchap-secret DHHC-1:01:ZDA1ZDM4YjRjYjcyODYxODY5MTMzZGFjMmU2ZjBlMzWEOGg3: 00:19:41.134 19:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.134 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.134 19:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:41.134 19:35:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.134 19:35:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.134 19:35:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.134 19:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:41.134 19:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:41.134 19:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:41.395 19:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 2 00:19:41.395 19:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:41.395 19:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:41.395 19:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:41.395 19:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:41.395 19:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 00:19:41.395 19:35:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.395 19:35:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.395 19:35:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.396 19:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:41.396 19:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:41.656 00:19:41.656 19:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:41.656 19:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:41.656 19:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.656 19:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.656 19:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.656 19:35:07 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.919 19:35:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.919 19:35:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.919 19:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:41.919 { 00:19:41.919 "cntlid": 61, 00:19:41.919 "qid": 0, 00:19:41.919 "state": "enabled", 00:19:41.919 "listen_address": { 00:19:41.919 "trtype": "TCP", 00:19:41.919 "adrfam": "IPv4", 00:19:41.919 "traddr": "10.0.0.2", 00:19:41.919 "trsvcid": "4420" 00:19:41.920 }, 00:19:41.920 "peer_address": { 00:19:41.920 "trtype": "TCP", 00:19:41.920 "adrfam": "IPv4", 00:19:41.920 "traddr": "10.0.0.1", 00:19:41.920 "trsvcid": "55832" 00:19:41.920 }, 00:19:41.920 "auth": { 00:19:41.920 "state": "completed", 00:19:41.920 "digest": "sha384", 00:19:41.920 "dhgroup": "ffdhe2048" 00:19:41.920 } 00:19:41.920 } 00:19:41.920 ]' 00:19:41.920 19:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:41.920 19:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:41.920 19:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:41.920 19:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:41.920 19:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:41.920 19:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.920 19:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.920 19:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.217 19:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZWIyZjg3NjQ5ODNhNGQyOWI2NmRmNDY0MzBiNzViMTMxMjU1Y2JmYmNlZTc1OTcxvm1qIQ==: 00:19:42.823 19:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.823 19:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:42.823 19:35:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.823 19:35:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.823 19:35:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.823 19:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:42.823 19:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:42.823 19:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:43.084 19:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 
ffdhe2048 3 00:19:43.084 19:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:43.084 19:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:43.084 19:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:43.084 19:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:43.084 19:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:43.084 19:35:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.084 19:35:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.084 19:35:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.084 19:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:43.084 19:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:43.344 00:19:43.344 19:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:43.344 19:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:43.345 19:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.605 19:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.605 19:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.605 19:35:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.605 19:35:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.605 19:35:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.605 19:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:43.605 { 00:19:43.605 "cntlid": 63, 00:19:43.605 "qid": 0, 00:19:43.605 "state": "enabled", 00:19:43.605 "listen_address": { 00:19:43.605 "trtype": "TCP", 00:19:43.605 "adrfam": "IPv4", 00:19:43.605 "traddr": "10.0.0.2", 00:19:43.605 "trsvcid": "4420" 00:19:43.605 }, 00:19:43.605 "peer_address": { 00:19:43.605 "trtype": "TCP", 00:19:43.605 "adrfam": "IPv4", 00:19:43.605 "traddr": "10.0.0.1", 00:19:43.605 "trsvcid": "55854" 00:19:43.605 }, 00:19:43.605 "auth": { 00:19:43.605 "state": "completed", 00:19:43.605 "digest": "sha384", 00:19:43.605 "dhgroup": "ffdhe2048" 00:19:43.605 } 00:19:43.605 } 00:19:43.605 ]' 00:19:43.605 19:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:43.605 19:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:43.605 19:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:43.605 19:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:19:43.605 19:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:43.865 19:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.865 19:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.865 19:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.865 19:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:YTI0MmFmM2IxMzE3NTYxNjZjYjkyYzFhNDcwMmIzZTAyOWNkOTdjNDEzMDU0NjBiYjZkNmI2NmY2NTA0ZjIyZq7CFVo=: 00:19:44.806 19:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.806 19:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:44.806 19:35:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.806 19:35:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.806 19:35:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.806 19:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:44.806 19:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:44.806 19:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:44.806 19:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:45.067 19:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 0 00:19:45.067 19:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:45.067 19:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:45.067 19:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:45.067 19:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:45.067 19:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 00:19:45.067 19:35:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.067 19:35:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.067 19:35:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.067 19:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:45.067 19:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:45.328 00:19:45.328 19:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:45.328 19:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:45.328 19:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.587 19:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.587 19:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.587 19:35:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.587 19:35:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.587 19:35:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.587 19:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:45.587 { 00:19:45.587 "cntlid": 65, 00:19:45.587 "qid": 0, 00:19:45.587 "state": "enabled", 00:19:45.587 "listen_address": { 00:19:45.587 "trtype": "TCP", 00:19:45.587 "adrfam": "IPv4", 00:19:45.587 "traddr": "10.0.0.2", 00:19:45.587 "trsvcid": "4420" 00:19:45.587 }, 00:19:45.587 "peer_address": { 00:19:45.587 "trtype": "TCP", 00:19:45.587 "adrfam": "IPv4", 00:19:45.587 "traddr": "10.0.0.1", 00:19:45.587 "trsvcid": "46034" 00:19:45.587 }, 00:19:45.587 "auth": { 00:19:45.587 "state": "completed", 00:19:45.587 "digest": "sha384", 00:19:45.587 "dhgroup": "ffdhe3072" 00:19:45.587 } 00:19:45.587 } 00:19:45.587 ]' 00:19:45.587 19:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:45.587 19:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:45.587 19:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:45.587 19:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:45.587 19:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:45.587 19:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.587 19:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.587 19:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.848 19:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZWE5M2FjMmM5NzE0ZThjOGQ3YWE2Y2QwYzU5MTIwN2Q3NjBiMzYyYzA0YTM5OTNiBPoO1A==: 00:19:46.794 19:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.794 19:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:46.794 19:35:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.794 19:35:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.794 19:35:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.794 19:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:46.794 19:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:46.794 19:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:46.794 19:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 1 00:19:46.794 19:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:46.794 19:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:46.794 19:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:46.794 19:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:46.794 19:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:19:46.794 19:35:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.794 19:35:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.794 19:35:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.794 19:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:46.794 19:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:47.056 00:19:47.056 19:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:47.056 19:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:47.056 19:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.317 19:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.317 19:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.317 19:35:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.317 19:35:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.317 19:35:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.317 19:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:47.317 { 00:19:47.317 "cntlid": 67, 00:19:47.317 "qid": 0, 
00:19:47.317 "state": "enabled", 00:19:47.317 "listen_address": { 00:19:47.317 "trtype": "TCP", 00:19:47.317 "adrfam": "IPv4", 00:19:47.317 "traddr": "10.0.0.2", 00:19:47.317 "trsvcid": "4420" 00:19:47.317 }, 00:19:47.317 "peer_address": { 00:19:47.317 "trtype": "TCP", 00:19:47.317 "adrfam": "IPv4", 00:19:47.317 "traddr": "10.0.0.1", 00:19:47.317 "trsvcid": "46070" 00:19:47.317 }, 00:19:47.317 "auth": { 00:19:47.317 "state": "completed", 00:19:47.317 "digest": "sha384", 00:19:47.317 "dhgroup": "ffdhe3072" 00:19:47.317 } 00:19:47.317 } 00:19:47.317 ]' 00:19:47.317 19:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:47.317 19:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:47.317 19:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:47.317 19:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:47.317 19:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:47.578 19:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.578 19:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.578 19:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.578 19:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZDA1ZDM4YjRjYjcyODYxODY5MTMzZGFjMmU2ZjBlMzWEOGg3: 00:19:48.521 19:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.521 19:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:48.521 19:35:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.521 19:35:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.521 19:35:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.521 19:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:48.521 19:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:48.521 19:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:48.521 19:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 2 00:19:48.521 19:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:48.521 19:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:48.521 19:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:48.521 19:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:48.521 19:35:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 00:19:48.521 19:35:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.521 19:35:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.521 19:35:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.521 19:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:48.521 19:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:48.781 00:19:49.041 19:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:49.041 19:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.041 19:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:49.041 19:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.041 19:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.041 19:35:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.041 19:35:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.041 19:35:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.041 19:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:49.041 { 00:19:49.041 "cntlid": 69, 00:19:49.041 "qid": 0, 00:19:49.041 "state": "enabled", 00:19:49.041 "listen_address": { 00:19:49.041 "trtype": "TCP", 00:19:49.041 "adrfam": "IPv4", 00:19:49.041 "traddr": "10.0.0.2", 00:19:49.041 "trsvcid": "4420" 00:19:49.041 }, 00:19:49.041 "peer_address": { 00:19:49.041 "trtype": "TCP", 00:19:49.041 "adrfam": "IPv4", 00:19:49.041 "traddr": "10.0.0.1", 00:19:49.041 "trsvcid": "46090" 00:19:49.041 }, 00:19:49.041 "auth": { 00:19:49.041 "state": "completed", 00:19:49.041 "digest": "sha384", 00:19:49.041 "dhgroup": "ffdhe3072" 00:19:49.041 } 00:19:49.041 } 00:19:49.041 ]' 00:19:49.041 19:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:49.302 19:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:49.302 19:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:49.302 19:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:49.302 19:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:49.302 19:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.302 19:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.302 19:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.562 19:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZWIyZjg3NjQ5ODNhNGQyOWI2NmRmNDY0MzBiNzViMTMxMjU1Y2JmYmNlZTc1OTcxvm1qIQ==: 00:19:50.133 19:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.394 19:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:50.394 19:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.394 19:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.394 19:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.394 19:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:50.394 19:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:50.394 19:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:50.394 19:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 3 00:19:50.394 19:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:50.394 19:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:50.394 19:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:50.394 19:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:50.394 19:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:50.395 19:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.395 19:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.395 19:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.395 19:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:50.395 19:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:50.967 00:19:50.967 19:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:50.967 19:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:50.967 19:35:16 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.967 19:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.967 19:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.967 19:35:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.967 19:35:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.967 19:35:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.967 19:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:50.967 { 00:19:50.967 "cntlid": 71, 00:19:50.967 "qid": 0, 00:19:50.967 "state": "enabled", 00:19:50.967 "listen_address": { 00:19:50.967 "trtype": "TCP", 00:19:50.967 "adrfam": "IPv4", 00:19:50.967 "traddr": "10.0.0.2", 00:19:50.967 "trsvcid": "4420" 00:19:50.967 }, 00:19:50.967 "peer_address": { 00:19:50.967 "trtype": "TCP", 00:19:50.967 "adrfam": "IPv4", 00:19:50.967 "traddr": "10.0.0.1", 00:19:50.967 "trsvcid": "46116" 00:19:50.967 }, 00:19:50.967 "auth": { 00:19:50.967 "state": "completed", 00:19:50.967 "digest": "sha384", 00:19:50.967 "dhgroup": "ffdhe3072" 00:19:50.967 } 00:19:50.967 } 00:19:50.967 ]' 00:19:50.967 19:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:50.967 19:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:50.967 19:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:51.227 19:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:51.227 19:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:51.227 19:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.227 19:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.227 19:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.488 19:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:YTI0MmFmM2IxMzE3NTYxNjZjYjkyYzFhNDcwMmIzZTAyOWNkOTdjNDEzMDU0NjBiYjZkNmI2NmY2NTA0ZjIyZq7CFVo=: 00:19:52.057 19:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.057 19:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:52.057 19:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.057 19:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.057 19:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.057 19:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:52.057 19:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # 
for keyid in "${!keys[@]}" 00:19:52.057 19:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:52.057 19:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:52.318 19:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 0 00:19:52.318 19:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:52.318 19:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:52.318 19:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:52.318 19:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:52.318 19:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 00:19:52.318 19:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.318 19:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.318 19:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.318 19:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:52.318 19:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:52.578 00:19:52.838 19:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:52.838 19:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:52.838 19:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.838 19:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.838 19:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.838 19:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.838 19:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.838 19:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.838 19:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:52.838 { 00:19:52.838 "cntlid": 73, 00:19:52.838 "qid": 0, 00:19:52.838 "state": "enabled", 00:19:52.838 "listen_address": { 00:19:52.838 "trtype": "TCP", 00:19:52.838 "adrfam": "IPv4", 00:19:52.838 "traddr": "10.0.0.2", 00:19:52.838 "trsvcid": "4420" 00:19:52.838 }, 00:19:52.838 "peer_address": { 00:19:52.838 "trtype": "TCP", 00:19:52.838 "adrfam": "IPv4", 00:19:52.838 "traddr": "10.0.0.1", 00:19:52.838 "trsvcid": "46134" 00:19:52.838 }, 00:19:52.838 "auth": { 00:19:52.838 "state": 
"completed", 00:19:52.838 "digest": "sha384", 00:19:52.838 "dhgroup": "ffdhe4096" 00:19:52.838 } 00:19:52.838 } 00:19:52.838 ]' 00:19:52.838 19:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:52.839 19:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:52.839 19:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:53.100 19:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:53.100 19:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:53.100 19:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.100 19:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.100 19:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.361 19:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZWE5M2FjMmM5NzE0ZThjOGQ3YWE2Y2QwYzU5MTIwN2Q3NjBiMzYyYzA0YTM5OTNiBPoO1A==: 00:19:53.932 19:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.932 19:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:53.932 19:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.932 19:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.932 19:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.932 19:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:53.932 19:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:53.932 19:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:54.194 19:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 1 00:19:54.194 19:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:54.194 19:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:54.194 19:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:54.194 19:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:54.194 19:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:19:54.194 19:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.194 19:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.194 19:35:20 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.194 19:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:54.194 19:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:54.454 00:19:54.715 19:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:54.715 19:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:54.715 19:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.715 19:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.715 19:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.715 19:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.715 19:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.715 19:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.715 19:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:54.715 { 00:19:54.715 "cntlid": 75, 00:19:54.715 "qid": 0, 00:19:54.715 "state": "enabled", 00:19:54.715 "listen_address": { 00:19:54.715 "trtype": "TCP", 00:19:54.715 "adrfam": "IPv4", 00:19:54.715 "traddr": "10.0.0.2", 00:19:54.715 "trsvcid": "4420" 00:19:54.715 }, 00:19:54.715 "peer_address": { 00:19:54.715 "trtype": "TCP", 00:19:54.715 "adrfam": "IPv4", 00:19:54.715 "traddr": "10.0.0.1", 00:19:54.715 "trsvcid": "45970" 00:19:54.715 }, 00:19:54.715 "auth": { 00:19:54.715 "state": "completed", 00:19:54.715 "digest": "sha384", 00:19:54.715 "dhgroup": "ffdhe4096" 00:19:54.715 } 00:19:54.715 } 00:19:54.715 ]' 00:19:54.715 19:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:54.976 19:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:54.976 19:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:54.976 19:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:54.976 19:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:54.976 19:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.976 19:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.976 19:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.236 19:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret 
DHHC-1:01:ZDA1ZDM4YjRjYjcyODYxODY5MTMzZGFjMmU2ZjBlMzWEOGg3: 00:19:55.809 19:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.809 19:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:55.809 19:35:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.809 19:35:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.809 19:35:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.809 19:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:55.809 19:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:55.809 19:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:56.070 19:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 2 00:19:56.070 19:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:56.070 19:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:56.070 19:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:56.070 19:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:56.070 19:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 00:19:56.070 19:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.070 19:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.070 19:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.070 19:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:56.070 19:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:56.331 00:19:56.331 19:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:56.331 19:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:56.331 19:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.592 19:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.592 19:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.592 19:35:22 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.592 19:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.592 19:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.592 19:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:56.592 { 00:19:56.592 "cntlid": 77, 00:19:56.592 "qid": 0, 00:19:56.592 "state": "enabled", 00:19:56.592 "listen_address": { 00:19:56.592 "trtype": "TCP", 00:19:56.592 "adrfam": "IPv4", 00:19:56.592 "traddr": "10.0.0.2", 00:19:56.592 "trsvcid": "4420" 00:19:56.592 }, 00:19:56.592 "peer_address": { 00:19:56.592 "trtype": "TCP", 00:19:56.592 "adrfam": "IPv4", 00:19:56.592 "traddr": "10.0.0.1", 00:19:56.592 "trsvcid": "46000" 00:19:56.592 }, 00:19:56.592 "auth": { 00:19:56.592 "state": "completed", 00:19:56.592 "digest": "sha384", 00:19:56.592 "dhgroup": "ffdhe4096" 00:19:56.592 } 00:19:56.592 } 00:19:56.592 ]' 00:19:56.592 19:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:56.592 19:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:56.592 19:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:56.592 19:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:56.592 19:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:56.853 19:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.853 19:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.853 19:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.853 19:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZWIyZjg3NjQ5ODNhNGQyOWI2NmRmNDY0MzBiNzViMTMxMjU1Y2JmYmNlZTc1OTcxvm1qIQ==: 00:19:57.794 19:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.794 19:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:57.794 19:35:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.794 19:35:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.794 19:35:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.794 19:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:57.794 19:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:57.794 19:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:58.054 19:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 3 00:19:58.054 
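Stepping back, the auth.sh@85/@86 loop headers that keep reappearing in the trace (for dhgroup in "${dhgroups[@]}", for keyid in "${!keys[@]}") give the overall shape of this stretch of the test. A rough reconstruction, assuming only what this excerpt shows: sha384 is the digest active here, the groups exercised run from ffdhe2048 through ffdhe6144, hostrpc is the script's wrapper for rpc.py -s /var/tmp/host.sock, and the dhgroups/keys arrays are the script's own, with contents outside this excerpt.

  for dhgroup in "${dhgroups[@]}"; do     # ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144, ... in this excerpt
      for keyid in "${!keys[@]}"; do      # keyid 0..3, i.e. key0 through key3
          hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha384 "$dhgroup" "$keyid"
      done
  done

connect_authenticate is the helper whose body (add_host, attach_controller, the qpair checks, detach, the nvme-cli connect/disconnect, remove_host) is what the individual entries above are tracing.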
19:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:58.054 19:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:58.054 19:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:58.054 19:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:58.054 19:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:58.054 19:35:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.054 19:35:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.054 19:35:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.055 19:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:58.055 19:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:58.314 00:19:58.314 19:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:58.314 19:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:58.314 19:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.575 19:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.575 19:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.575 19:35:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.575 19:35:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.575 19:35:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.575 19:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:58.575 { 00:19:58.575 "cntlid": 79, 00:19:58.575 "qid": 0, 00:19:58.575 "state": "enabled", 00:19:58.575 "listen_address": { 00:19:58.575 "trtype": "TCP", 00:19:58.575 "adrfam": "IPv4", 00:19:58.575 "traddr": "10.0.0.2", 00:19:58.575 "trsvcid": "4420" 00:19:58.575 }, 00:19:58.575 "peer_address": { 00:19:58.575 "trtype": "TCP", 00:19:58.575 "adrfam": "IPv4", 00:19:58.575 "traddr": "10.0.0.1", 00:19:58.575 "trsvcid": "46038" 00:19:58.575 }, 00:19:58.575 "auth": { 00:19:58.575 "state": "completed", 00:19:58.575 "digest": "sha384", 00:19:58.575 "dhgroup": "ffdhe4096" 00:19:58.575 } 00:19:58.575 } 00:19:58.575 ]' 00:19:58.575 19:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:58.575 19:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:58.575 19:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:58.575 19:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:58.575 
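The jq assertions that follow each attach are what actually decide pass/fail for an iteration: qpair 0 of the subsystem has to report exactly the digest and DH group forced on the host side, and its auth state has to read "completed". Restated from the checks visible in the trace (the subsystem NQN and expected values are the ones from this ffdhe4096/key3 iteration; the herestring form is a convenience, not necessarily the literal script):

  qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]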
19:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:58.575 19:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.575 19:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.575 19:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.836 19:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:YTI0MmFmM2IxMzE3NTYxNjZjYjkyYzFhNDcwMmIzZTAyOWNkOTdjNDEzMDU0NjBiYjZkNmI2NmY2NTA0ZjIyZq7CFVo=: 00:19:59.780 19:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.780 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.780 19:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:59.780 19:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.780 19:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.780 19:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.780 19:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:59.780 19:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:59.780 19:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:59.780 19:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:59.780 19:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 0 00:19:59.780 19:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:59.780 19:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:59.780 19:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:59.780 19:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:59.780 19:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 00:19:59.780 19:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.780 19:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.780 19:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.780 19:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:59.780 19:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:00.351 00:20:00.351 19:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:00.351 19:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:00.351 19:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.351 19:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.351 19:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.351 19:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.351 19:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.351 19:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.612 19:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:00.612 { 00:20:00.612 "cntlid": 81, 00:20:00.612 "qid": 0, 00:20:00.612 "state": "enabled", 00:20:00.612 "listen_address": { 00:20:00.612 "trtype": "TCP", 00:20:00.612 "adrfam": "IPv4", 00:20:00.612 "traddr": "10.0.0.2", 00:20:00.612 "trsvcid": "4420" 00:20:00.612 }, 00:20:00.612 "peer_address": { 00:20:00.612 "trtype": "TCP", 00:20:00.612 "adrfam": "IPv4", 00:20:00.612 "traddr": "10.0.0.1", 00:20:00.612 "trsvcid": "46060" 00:20:00.612 }, 00:20:00.612 "auth": { 00:20:00.612 "state": "completed", 00:20:00.612 "digest": "sha384", 00:20:00.612 "dhgroup": "ffdhe6144" 00:20:00.612 } 00:20:00.612 } 00:20:00.612 ]' 00:20:00.612 19:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:00.612 19:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:00.612 19:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:00.612 19:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:00.612 19:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:00.612 19:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.612 19:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.612 19:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.873 19:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZWE5M2FjMmM5NzE0ZThjOGQ3YWE2Y2QwYzU5MTIwN2Q3NjBiMzYyYzA0YTM5OTNiBPoO1A==: 00:20:01.444 19:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.444 19:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:01.444 19:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.444 19:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.705 19:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.705 19:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:01.705 19:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:01.705 19:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:01.705 19:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 1 00:20:01.705 19:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:01.705 19:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:01.705 19:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:01.705 19:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:01.705 19:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:20:01.705 19:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.705 19:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.705 19:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.705 19:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:01.705 19:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:02.276 00:20:02.276 19:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:02.276 19:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:02.276 19:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.537 19:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.537 19:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.537 19:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.537 19:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.537 19:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.537 19:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:02.537 { 00:20:02.537 "cntlid": 83, 00:20:02.537 "qid": 0, 
00:20:02.537 "state": "enabled", 00:20:02.537 "listen_address": { 00:20:02.537 "trtype": "TCP", 00:20:02.537 "adrfam": "IPv4", 00:20:02.537 "traddr": "10.0.0.2", 00:20:02.537 "trsvcid": "4420" 00:20:02.537 }, 00:20:02.537 "peer_address": { 00:20:02.537 "trtype": "TCP", 00:20:02.537 "adrfam": "IPv4", 00:20:02.537 "traddr": "10.0.0.1", 00:20:02.537 "trsvcid": "46100" 00:20:02.537 }, 00:20:02.537 "auth": { 00:20:02.537 "state": "completed", 00:20:02.538 "digest": "sha384", 00:20:02.538 "dhgroup": "ffdhe6144" 00:20:02.538 } 00:20:02.538 } 00:20:02.538 ]' 00:20:02.538 19:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:02.538 19:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:02.538 19:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:02.538 19:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:02.538 19:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:02.538 19:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.538 19:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.538 19:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.798 19:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZDA1ZDM4YjRjYjcyODYxODY5MTMzZGFjMmU2ZjBlMzWEOGg3: 00:20:03.370 19:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.631 19:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:03.631 19:35:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.631 19:35:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.631 19:35:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.631 19:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:03.631 19:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:03.631 19:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:03.631 19:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 2 00:20:03.631 19:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:03.631 19:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:03.631 19:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:03.631 19:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:03.631 19:35:29 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 00:20:03.631 19:35:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.631 19:35:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.631 19:35:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.631 19:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:03.631 19:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:04.203 00:20:04.203 19:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:04.203 19:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.203 19:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:04.464 19:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.464 19:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.464 19:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.464 19:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.464 19:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.464 19:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:04.464 { 00:20:04.464 "cntlid": 85, 00:20:04.464 "qid": 0, 00:20:04.464 "state": "enabled", 00:20:04.464 "listen_address": { 00:20:04.464 "trtype": "TCP", 00:20:04.464 "adrfam": "IPv4", 00:20:04.464 "traddr": "10.0.0.2", 00:20:04.464 "trsvcid": "4420" 00:20:04.464 }, 00:20:04.464 "peer_address": { 00:20:04.464 "trtype": "TCP", 00:20:04.464 "adrfam": "IPv4", 00:20:04.464 "traddr": "10.0.0.1", 00:20:04.464 "trsvcid": "55512" 00:20:04.464 }, 00:20:04.464 "auth": { 00:20:04.464 "state": "completed", 00:20:04.464 "digest": "sha384", 00:20:04.464 "dhgroup": "ffdhe6144" 00:20:04.464 } 00:20:04.464 } 00:20:04.464 ]' 00:20:04.464 19:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:04.464 19:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:04.464 19:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:04.464 19:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:04.464 19:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:04.464 19:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.464 19:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.464 19:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.726 19:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZWIyZjg3NjQ5ODNhNGQyOWI2NmRmNDY0MzBiNzViMTMxMjU1Y2JmYmNlZTc1OTcxvm1qIQ==: 00:20:05.666 19:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.666 19:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:05.666 19:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.666 19:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.666 19:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.666 19:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:05.666 19:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:05.666 19:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:05.666 19:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 3 00:20:05.666 19:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:05.666 19:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:05.666 19:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:05.666 19:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:05.666 19:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:05.666 19:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.666 19:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.666 19:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.666 19:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:05.666 19:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:06.238 00:20:06.238 19:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:06.238 19:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.238 19:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:06.238 19:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.238 19:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.239 19:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.239 19:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.239 19:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.239 19:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:06.239 { 00:20:06.239 "cntlid": 87, 00:20:06.239 "qid": 0, 00:20:06.239 "state": "enabled", 00:20:06.239 "listen_address": { 00:20:06.239 "trtype": "TCP", 00:20:06.239 "adrfam": "IPv4", 00:20:06.239 "traddr": "10.0.0.2", 00:20:06.239 "trsvcid": "4420" 00:20:06.239 }, 00:20:06.239 "peer_address": { 00:20:06.239 "trtype": "TCP", 00:20:06.239 "adrfam": "IPv4", 00:20:06.239 "traddr": "10.0.0.1", 00:20:06.239 "trsvcid": "55544" 00:20:06.239 }, 00:20:06.239 "auth": { 00:20:06.239 "state": "completed", 00:20:06.239 "digest": "sha384", 00:20:06.239 "dhgroup": "ffdhe6144" 00:20:06.239 } 00:20:06.239 } 00:20:06.239 ]' 00:20:06.239 19:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:06.500 19:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:06.500 19:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:06.500 19:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:06.500 19:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:06.500 19:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.500 19:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.500 19:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.761 19:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:YTI0MmFmM2IxMzE3NTYxNjZjYjkyYzFhNDcwMmIzZTAyOWNkOTdjNDEzMDU0NjBiYjZkNmI2NmY2NTA0ZjIyZq7CFVo=: 00:20:07.332 19:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.332 19:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:07.332 19:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.332 19:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.332 19:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.332 19:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:07.332 19:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # 
for keyid in "${!keys[@]}" 00:20:07.332 19:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:07.332 19:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:07.594 19:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 0 00:20:07.594 19:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:07.594 19:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:07.594 19:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:07.594 19:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:07.594 19:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 00:20:07.594 19:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.594 19:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.594 19:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.594 19:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:07.594 19:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:08.165 00:20:08.426 19:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:08.426 19:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:08.426 19:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.426 19:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.426 19:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.426 19:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.426 19:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.426 19:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.426 19:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:08.426 { 00:20:08.426 "cntlid": 89, 00:20:08.426 "qid": 0, 00:20:08.426 "state": "enabled", 00:20:08.426 "listen_address": { 00:20:08.426 "trtype": "TCP", 00:20:08.426 "adrfam": "IPv4", 00:20:08.426 "traddr": "10.0.0.2", 00:20:08.426 "trsvcid": "4420" 00:20:08.426 }, 00:20:08.426 "peer_address": { 00:20:08.426 "trtype": "TCP", 00:20:08.426 "adrfam": "IPv4", 00:20:08.426 "traddr": "10.0.0.1", 00:20:08.426 "trsvcid": "55570" 00:20:08.426 }, 00:20:08.426 "auth": { 00:20:08.426 "state": 
"completed", 00:20:08.426 "digest": "sha384", 00:20:08.426 "dhgroup": "ffdhe8192" 00:20:08.426 } 00:20:08.426 } 00:20:08.426 ]' 00:20:08.426 19:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:08.687 19:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:08.687 19:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:08.687 19:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:08.687 19:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:08.687 19:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.687 19:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.687 19:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.961 19:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZWE5M2FjMmM5NzE0ZThjOGQ3YWE2Y2QwYzU5MTIwN2Q3NjBiMzYyYzA0YTM5OTNiBPoO1A==: 00:20:09.538 19:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.538 19:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:09.538 19:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.538 19:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.538 19:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.538 19:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:09.538 19:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:09.538 19:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:09.799 19:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 1 00:20:09.799 19:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:09.799 19:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:09.799 19:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:09.799 19:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:09.799 19:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:20:09.799 19:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.799 19:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.799 19:35:35 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.799 19:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:09.799 19:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:10.370 00:20:10.370 19:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:10.370 19:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:10.370 19:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.632 19:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.632 19:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.632 19:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.632 19:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.632 19:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.632 19:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:10.632 { 00:20:10.632 "cntlid": 91, 00:20:10.632 "qid": 0, 00:20:10.632 "state": "enabled", 00:20:10.632 "listen_address": { 00:20:10.632 "trtype": "TCP", 00:20:10.632 "adrfam": "IPv4", 00:20:10.632 "traddr": "10.0.0.2", 00:20:10.632 "trsvcid": "4420" 00:20:10.632 }, 00:20:10.632 "peer_address": { 00:20:10.632 "trtype": "TCP", 00:20:10.632 "adrfam": "IPv4", 00:20:10.632 "traddr": "10.0.0.1", 00:20:10.632 "trsvcid": "55610" 00:20:10.632 }, 00:20:10.632 "auth": { 00:20:10.632 "state": "completed", 00:20:10.632 "digest": "sha384", 00:20:10.632 "dhgroup": "ffdhe8192" 00:20:10.632 } 00:20:10.632 } 00:20:10.632 ]' 00:20:10.632 19:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:10.632 19:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:10.632 19:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:10.893 19:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:10.893 19:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:10.893 19:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.893 19:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.893 19:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.154 19:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret 
DHHC-1:01:ZDA1ZDM4YjRjYjcyODYxODY5MTMzZGFjMmU2ZjBlMzWEOGg3: 00:20:11.725 19:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.725 19:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:11.725 19:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.725 19:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.725 19:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.725 19:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:11.725 19:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:11.725 19:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:11.985 19:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 2 00:20:11.985 19:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:11.985 19:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:11.985 19:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:11.985 19:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:11.985 19:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 00:20:11.985 19:35:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.985 19:35:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.985 19:35:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.985 19:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:11.985 19:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:12.556 00:20:12.556 19:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:12.556 19:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.556 19:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:12.817 19:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.817 19:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.817 19:35:38 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.817 19:35:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.817 19:35:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.817 19:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:12.817 { 00:20:12.817 "cntlid": 93, 00:20:12.817 "qid": 0, 00:20:12.817 "state": "enabled", 00:20:12.817 "listen_address": { 00:20:12.817 "trtype": "TCP", 00:20:12.817 "adrfam": "IPv4", 00:20:12.817 "traddr": "10.0.0.2", 00:20:12.817 "trsvcid": "4420" 00:20:12.817 }, 00:20:12.817 "peer_address": { 00:20:12.817 "trtype": "TCP", 00:20:12.817 "adrfam": "IPv4", 00:20:12.817 "traddr": "10.0.0.1", 00:20:12.817 "trsvcid": "55636" 00:20:12.817 }, 00:20:12.817 "auth": { 00:20:12.817 "state": "completed", 00:20:12.817 "digest": "sha384", 00:20:12.817 "dhgroup": "ffdhe8192" 00:20:12.817 } 00:20:12.817 } 00:20:12.817 ]' 00:20:12.817 19:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:12.817 19:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:12.817 19:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:13.078 19:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:13.078 19:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:13.078 19:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.078 19:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.078 19:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.078 19:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZWIyZjg3NjQ5ODNhNGQyOWI2NmRmNDY0MzBiNzViMTMxMjU1Y2JmYmNlZTc1OTcxvm1qIQ==: 00:20:14.019 19:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.019 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.019 19:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:14.019 19:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.019 19:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.019 19:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.019 19:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:14.019 19:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:14.020 19:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:14.020 19:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 3 00:20:14.020 
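Condensed for readability, each sha384/ffdhe8192 round traced here reduces to the RPC sequence below. This is a sketch, not a substitute for target/auth.sh: $RPC stands in for the target-side rpc_cmd wrapper (assumed to hit the target's default RPC socket), $HOSTRPC is the hostrpc helper seen in the trace (rpc.py -s /var/tmp/host.sock), and the NQN, host UUID and key index are the ones used in this run.

    # helpers as used in this run (default-socket target RPC is an assumption)
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOSTRPC="$RPC -s /var/tmp/host.sock"
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
    SUBNQN=nqn.2024-03.io.spdk:cnode0

    # host bdev_nvme advertises the digest/dhgroup under test before attaching
    $HOSTRPC bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
    # target: allow the host and bind it to the key under test
    $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key3
    # host: attach; the controller only shows up if DH-HMAC-CHAP completes
    $HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key3
    $HOSTRPC bdev_nvme_get_controllers | jq -r '.[].name'          # expect: nvme0
    # target: the admin qpair must report the negotiated auth parameters
    $RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.digest'   # sha384
    $RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.dhgroup'  # ffdhe8192
    $RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'    # completed
    # tear down before the next digest/dhgroup/key combination
    $HOSTRPC bdev_nvme_detach_controller nvme0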
19:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:14.020 19:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:14.020 19:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:14.020 19:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:14.020 19:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:14.020 19:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.020 19:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.020 19:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.020 19:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:14.020 19:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:14.962 00:20:14.962 19:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:14.962 19:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.962 19:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:14.962 19:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.962 19:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.962 19:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.962 19:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.962 19:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.962 19:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:14.962 { 00:20:14.962 "cntlid": 95, 00:20:14.962 "qid": 0, 00:20:14.962 "state": "enabled", 00:20:14.962 "listen_address": { 00:20:14.962 "trtype": "TCP", 00:20:14.962 "adrfam": "IPv4", 00:20:14.962 "traddr": "10.0.0.2", 00:20:14.962 "trsvcid": "4420" 00:20:14.962 }, 00:20:14.962 "peer_address": { 00:20:14.962 "trtype": "TCP", 00:20:14.962 "adrfam": "IPv4", 00:20:14.962 "traddr": "10.0.0.1", 00:20:14.962 "trsvcid": "41468" 00:20:14.962 }, 00:20:14.962 "auth": { 00:20:14.962 "state": "completed", 00:20:14.962 "digest": "sha384", 00:20:14.962 "dhgroup": "ffdhe8192" 00:20:14.962 } 00:20:14.962 } 00:20:14.962 ]' 00:20:14.962 19:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:14.962 19:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:14.962 19:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:14.962 19:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:14.962 
19:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:15.223 19:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.223 19:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.223 19:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.223 19:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:YTI0MmFmM2IxMzE3NTYxNjZjYjkyYzFhNDcwMmIzZTAyOWNkOTdjNDEzMDU0NjBiYjZkNmI2NmY2NTA0ZjIyZq7CFVo=: 00:20:16.166 19:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.167 19:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:16.167 19:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.167 19:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.167 19:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.167 19:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:20:16.167 19:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:16.167 19:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:16.167 19:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:16.167 19:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:16.167 19:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 0 00:20:16.167 19:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:16.167 19:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:16.167 19:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:16.167 19:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:16.167 19:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 00:20:16.167 19:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.167 19:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.427 19:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.427 19:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:16.427 19:35:42 
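After the bdev_nvme checks, each round also re-verifies the same key from the kernel initiator with nvme-cli before the host/key binding is dropped, as in the connect/disconnect pairs throughout this trace. A minimal sketch of that step; the secret is abbreviated to a placeholder here (the run above passes the full DHHC-1-formatted string for the key under test), and rpc_cmd is again written as plain rpc.py against the target's default socket, which is an assumption.

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
    HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
    SECRET='DHHC-1:00:...'   # placeholder for the full base64-encoded key0 secret
    # kernel host path: same key, handed to nvme-cli in DHHC-1 form
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$HOSTNQN" --hostid "$HOSTID" --dhchap-secret "$SECRET"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # expect: disconnected 1 controller(s)
    # drop the host from the subsystem so the next key can be wired up
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"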
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:16.427 00:20:16.689 19:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:16.689 19:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:16.689 19:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.689 19:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.689 19:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.689 19:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.689 19:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.689 19:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.689 19:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:16.689 { 00:20:16.689 "cntlid": 97, 00:20:16.689 "qid": 0, 00:20:16.689 "state": "enabled", 00:20:16.689 "listen_address": { 00:20:16.689 "trtype": "TCP", 00:20:16.689 "adrfam": "IPv4", 00:20:16.689 "traddr": "10.0.0.2", 00:20:16.689 "trsvcid": "4420" 00:20:16.689 }, 00:20:16.689 "peer_address": { 00:20:16.689 "trtype": "TCP", 00:20:16.689 "adrfam": "IPv4", 00:20:16.689 "traddr": "10.0.0.1", 00:20:16.689 "trsvcid": "41492" 00:20:16.689 }, 00:20:16.689 "auth": { 00:20:16.689 "state": "completed", 00:20:16.689 "digest": "sha512", 00:20:16.689 "dhgroup": "null" 00:20:16.689 } 00:20:16.689 } 00:20:16.689 ]' 00:20:16.689 19:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:16.949 19:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:16.949 19:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:16.949 19:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:20:16.949 19:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:16.949 19:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.949 19:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.949 19:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.209 19:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZWE5M2FjMmM5NzE0ZThjOGQ3YWE2Y2QwYzU5MTIwN2Q3NjBiMzYyYzA0YTM5OTNiBPoO1A==: 00:20:17.782 19:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.782 19:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:17.782 19:35:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.782 19:35:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.782 19:35:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.782 19:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:17.782 19:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:17.782 19:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:18.042 19:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 1 00:20:18.042 19:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:18.042 19:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:18.042 19:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:18.042 19:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:18.042 19:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:20:18.042 19:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.042 19:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.042 19:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.042 19:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:18.042 19:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:18.303 00:20:18.303 19:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:18.303 19:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:18.303 19:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.565 19:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.565 19:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.565 19:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.565 19:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.565 19:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.565 19:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:18.565 { 00:20:18.565 "cntlid": 99, 00:20:18.565 "qid": 
0, 00:20:18.565 "state": "enabled", 00:20:18.565 "listen_address": { 00:20:18.565 "trtype": "TCP", 00:20:18.565 "adrfam": "IPv4", 00:20:18.565 "traddr": "10.0.0.2", 00:20:18.565 "trsvcid": "4420" 00:20:18.565 }, 00:20:18.565 "peer_address": { 00:20:18.565 "trtype": "TCP", 00:20:18.565 "adrfam": "IPv4", 00:20:18.565 "traddr": "10.0.0.1", 00:20:18.565 "trsvcid": "41526" 00:20:18.565 }, 00:20:18.565 "auth": { 00:20:18.565 "state": "completed", 00:20:18.565 "digest": "sha512", 00:20:18.565 "dhgroup": "null" 00:20:18.565 } 00:20:18.565 } 00:20:18.565 ]' 00:20:18.565 19:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:18.565 19:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:18.565 19:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:18.825 19:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:20:18.825 19:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:18.825 19:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.825 19:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.825 19:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.088 19:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZDA1ZDM4YjRjYjcyODYxODY5MTMzZGFjMmU2ZjBlMzWEOGg3: 00:20:19.658 19:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.658 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.658 19:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:19.658 19:35:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.658 19:35:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.658 19:35:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.658 19:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:19.658 19:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:19.658 19:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:19.920 19:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 2 00:20:19.920 19:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:19.920 19:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:19.920 19:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:19.920 19:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:19.920 19:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 00:20:19.920 19:35:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.920 19:35:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.920 19:35:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.920 19:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:19.920 19:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:20.181 00:20:20.181 19:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:20.181 19:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.181 19:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:20.442 19:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.442 19:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.442 19:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.442 19:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.442 19:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.442 19:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:20.442 { 00:20:20.442 "cntlid": 101, 00:20:20.442 "qid": 0, 00:20:20.442 "state": "enabled", 00:20:20.442 "listen_address": { 00:20:20.442 "trtype": "TCP", 00:20:20.442 "adrfam": "IPv4", 00:20:20.442 "traddr": "10.0.0.2", 00:20:20.442 "trsvcid": "4420" 00:20:20.442 }, 00:20:20.442 "peer_address": { 00:20:20.442 "trtype": "TCP", 00:20:20.442 "adrfam": "IPv4", 00:20:20.442 "traddr": "10.0.0.1", 00:20:20.442 "trsvcid": "41558" 00:20:20.442 }, 00:20:20.442 "auth": { 00:20:20.442 "state": "completed", 00:20:20.442 "digest": "sha512", 00:20:20.442 "dhgroup": "null" 00:20:20.442 } 00:20:20.442 } 00:20:20.442 ]' 00:20:20.442 19:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:20.442 19:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:20.442 19:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:20.442 19:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:20:20.442 19:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:20.442 19:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.442 19:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.442 19:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.704 19:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZWIyZjg3NjQ5ODNhNGQyOWI2NmRmNDY0MzBiNzViMTMxMjU1Y2JmYmNlZTc1OTcxvm1qIQ==: 00:20:21.276 19:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.276 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.276 19:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:21.276 19:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.276 19:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.276 19:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.276 19:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:21.276 19:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:21.276 19:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:21.536 19:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 3 00:20:21.536 19:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:21.536 19:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:21.536 19:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:21.536 19:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:21.536 19:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:21.536 19:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.536 19:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.537 19:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.537 19:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:21.537 19:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:21.797 00:20:21.797 19:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:21.797 19:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:21.797 19:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.059 19:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.059 19:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.059 19:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.059 19:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.059 19:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.059 19:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:22.059 { 00:20:22.059 "cntlid": 103, 00:20:22.059 "qid": 0, 00:20:22.059 "state": "enabled", 00:20:22.059 "listen_address": { 00:20:22.059 "trtype": "TCP", 00:20:22.059 "adrfam": "IPv4", 00:20:22.059 "traddr": "10.0.0.2", 00:20:22.059 "trsvcid": "4420" 00:20:22.059 }, 00:20:22.059 "peer_address": { 00:20:22.059 "trtype": "TCP", 00:20:22.059 "adrfam": "IPv4", 00:20:22.059 "traddr": "10.0.0.1", 00:20:22.059 "trsvcid": "41578" 00:20:22.059 }, 00:20:22.059 "auth": { 00:20:22.059 "state": "completed", 00:20:22.059 "digest": "sha512", 00:20:22.059 "dhgroup": "null" 00:20:22.059 } 00:20:22.059 } 00:20:22.059 ]' 00:20:22.059 19:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:22.059 19:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:22.059 19:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:22.059 19:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:20:22.059 19:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:22.319 19:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.319 19:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.319 19:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.319 19:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:YTI0MmFmM2IxMzE3NTYxNjZjYjkyYzFhNDcwMmIzZTAyOWNkOTdjNDEzMDU0NjBiYjZkNmI2NmY2NTA0ZjIyZq7CFVo=: 00:20:23.258 19:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.258 19:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:23.258 19:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.258 19:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.258 19:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.258 19:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:23.258 19:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:23.258 19:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:23.258 19:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:23.258 19:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 0 00:20:23.258 19:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:23.258 19:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:23.258 19:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:23.258 19:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:23.258 19:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 00:20:23.258 19:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.258 19:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.518 19:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.518 19:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:23.518 19:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:23.518 00:20:23.779 19:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:23.779 19:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:23.779 19:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.779 19:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.779 19:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.779 19:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.779 19:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.779 19:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.779 19:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:23.779 { 00:20:23.779 "cntlid": 105, 00:20:23.779 "qid": 0, 00:20:23.779 "state": "enabled", 00:20:23.779 "listen_address": { 00:20:23.779 "trtype": "TCP", 00:20:23.779 "adrfam": "IPv4", 00:20:23.779 "traddr": "10.0.0.2", 00:20:23.779 "trsvcid": "4420" 00:20:23.779 }, 00:20:23.779 "peer_address": { 00:20:23.779 "trtype": "TCP", 00:20:23.779 "adrfam": "IPv4", 00:20:23.779 "traddr": "10.0.0.1", 00:20:23.779 "trsvcid": "46502" 00:20:23.779 }, 00:20:23.779 "auth": { 00:20:23.779 "state": "completed", 00:20:23.779 "digest": "sha512", 00:20:23.779 "dhgroup": "ffdhe2048" 00:20:23.779 } 00:20:23.779 } 
00:20:23.779 ]' 00:20:23.779 19:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:24.041 19:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:24.041 19:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:24.041 19:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:24.041 19:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:24.041 19:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.041 19:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.041 19:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.301 19:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZWE5M2FjMmM5NzE0ZThjOGQ3YWE2Y2QwYzU5MTIwN2Q3NjBiMzYyYzA0YTM5OTNiBPoO1A==: 00:20:24.872 19:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.132 19:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:25.132 19:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.132 19:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.132 19:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.132 19:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:25.132 19:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:25.132 19:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:25.132 19:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 1 00:20:25.132 19:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:25.132 19:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:25.132 19:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:25.132 19:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:25.132 19:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:20:25.132 19:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.132 19:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.132 19:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.132 19:35:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:25.132 19:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:25.392 00:20:25.392 19:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:25.392 19:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:25.392 19:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.652 19:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.652 19:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.652 19:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.652 19:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.652 19:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.652 19:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:25.652 { 00:20:25.652 "cntlid": 107, 00:20:25.652 "qid": 0, 00:20:25.652 "state": "enabled", 00:20:25.652 "listen_address": { 00:20:25.652 "trtype": "TCP", 00:20:25.652 "adrfam": "IPv4", 00:20:25.652 "traddr": "10.0.0.2", 00:20:25.652 "trsvcid": "4420" 00:20:25.652 }, 00:20:25.652 "peer_address": { 00:20:25.652 "trtype": "TCP", 00:20:25.652 "adrfam": "IPv4", 00:20:25.652 "traddr": "10.0.0.1", 00:20:25.652 "trsvcid": "46526" 00:20:25.652 }, 00:20:25.652 "auth": { 00:20:25.652 "state": "completed", 00:20:25.652 "digest": "sha512", 00:20:25.652 "dhgroup": "ffdhe2048" 00:20:25.652 } 00:20:25.652 } 00:20:25.652 ]' 00:20:25.652 19:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:25.652 19:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:25.652 19:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:25.912 19:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:25.912 19:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:25.912 19:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.912 19:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.912 19:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.172 19:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZDA1ZDM4YjRjYjcyODYxODY5MTMzZGFjMmU2ZjBlMzWEOGg3: 00:20:26.743 19:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # 
nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.743 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.743 19:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:26.743 19:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.743 19:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.743 19:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.743 19:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:26.744 19:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:26.744 19:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:27.004 19:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 2 00:20:27.004 19:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:27.004 19:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:27.004 19:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:27.004 19:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:27.004 19:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 00:20:27.004 19:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.004 19:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.004 19:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.004 19:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:27.004 19:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:27.264 00:20:27.264 19:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:27.264 19:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:27.264 19:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.526 19:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.526 19:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.526 19:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.526 19:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:20:27.526 19:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.526 19:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:27.526 { 00:20:27.526 "cntlid": 109, 00:20:27.526 "qid": 0, 00:20:27.526 "state": "enabled", 00:20:27.526 "listen_address": { 00:20:27.526 "trtype": "TCP", 00:20:27.526 "adrfam": "IPv4", 00:20:27.526 "traddr": "10.0.0.2", 00:20:27.526 "trsvcid": "4420" 00:20:27.526 }, 00:20:27.526 "peer_address": { 00:20:27.526 "trtype": "TCP", 00:20:27.526 "adrfam": "IPv4", 00:20:27.526 "traddr": "10.0.0.1", 00:20:27.526 "trsvcid": "46546" 00:20:27.526 }, 00:20:27.526 "auth": { 00:20:27.526 "state": "completed", 00:20:27.526 "digest": "sha512", 00:20:27.526 "dhgroup": "ffdhe2048" 00:20:27.526 } 00:20:27.526 } 00:20:27.526 ]' 00:20:27.526 19:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:27.526 19:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:27.526 19:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:27.526 19:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:27.526 19:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:27.787 19:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.787 19:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.787 19:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.787 19:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZWIyZjg3NjQ5ODNhNGQyOWI2NmRmNDY0MzBiNzViMTMxMjU1Y2JmYmNlZTc1OTcxvm1qIQ==: 00:20:28.729 19:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.729 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.729 19:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:28.729 19:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.729 19:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.729 19:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.729 19:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:28.729 19:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:28.729 19:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:28.729 19:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 3 00:20:28.729 19:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:28.729 19:35:54 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@36 -- # digest=sha512 00:20:28.729 19:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:28.729 19:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:28.729 19:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:28.729 19:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.729 19:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.990 19:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.990 19:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:28.990 19:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:29.250 00:20:29.250 19:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:29.250 19:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:29.250 19:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.250 19:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.250 19:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.250 19:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.250 19:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.250 19:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.250 19:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:29.250 { 00:20:29.250 "cntlid": 111, 00:20:29.250 "qid": 0, 00:20:29.250 "state": "enabled", 00:20:29.250 "listen_address": { 00:20:29.250 "trtype": "TCP", 00:20:29.250 "adrfam": "IPv4", 00:20:29.250 "traddr": "10.0.0.2", 00:20:29.250 "trsvcid": "4420" 00:20:29.250 }, 00:20:29.250 "peer_address": { 00:20:29.250 "trtype": "TCP", 00:20:29.250 "adrfam": "IPv4", 00:20:29.250 "traddr": "10.0.0.1", 00:20:29.250 "trsvcid": "46572" 00:20:29.250 }, 00:20:29.250 "auth": { 00:20:29.250 "state": "completed", 00:20:29.250 "digest": "sha512", 00:20:29.250 "dhgroup": "ffdhe2048" 00:20:29.250 } 00:20:29.250 } 00:20:29.250 ]' 00:20:29.250 19:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:29.511 19:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:29.511 19:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:29.511 19:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:29.511 19:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:29.511 19:35:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.511 19:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.511 19:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.771 19:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:YTI0MmFmM2IxMzE3NTYxNjZjYjkyYzFhNDcwMmIzZTAyOWNkOTdjNDEzMDU0NjBiYjZkNmI2NmY2NTA0ZjIyZq7CFVo=: 00:20:30.341 19:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.602 19:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:30.602 19:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.602 19:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.602 19:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.602 19:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:30.602 19:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:30.602 19:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:30.602 19:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:30.602 19:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 0 00:20:30.602 19:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:30.602 19:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:30.602 19:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:30.602 19:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:30.602 19:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 00:20:30.602 19:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.602 19:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.602 19:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.602 19:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:30.602 19:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:30.862 00:20:30.862 19:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:30.862 19:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:30.862 19:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.123 19:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.123 19:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.123 19:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.123 19:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.123 19:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.123 19:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:31.123 { 00:20:31.123 "cntlid": 113, 00:20:31.123 "qid": 0, 00:20:31.123 "state": "enabled", 00:20:31.123 "listen_address": { 00:20:31.123 "trtype": "TCP", 00:20:31.123 "adrfam": "IPv4", 00:20:31.123 "traddr": "10.0.0.2", 00:20:31.123 "trsvcid": "4420" 00:20:31.123 }, 00:20:31.123 "peer_address": { 00:20:31.123 "trtype": "TCP", 00:20:31.123 "adrfam": "IPv4", 00:20:31.123 "traddr": "10.0.0.1", 00:20:31.123 "trsvcid": "46598" 00:20:31.123 }, 00:20:31.123 "auth": { 00:20:31.123 "state": "completed", 00:20:31.123 "digest": "sha512", 00:20:31.123 "dhgroup": "ffdhe3072" 00:20:31.123 } 00:20:31.123 } 00:20:31.123 ]' 00:20:31.123 19:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:31.384 19:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:31.384 19:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:31.384 19:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:31.384 19:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:31.384 19:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.384 19:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.384 19:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.645 19:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZWE5M2FjMmM5NzE0ZThjOGQ3YWE2Y2QwYzU5MTIwN2Q3NjBiMzYyYzA0YTM5OTNiBPoO1A==: 00:20:32.217 19:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.217 19:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:32.217 19:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
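Read back from the trace, each pass of the test reduces to a short, fixed sequence of RPCs. The lines below are a hand-written sketch of the sha512/ffdhe3072/key0 cycle that has just completed above, reusing the rpc.py path, socket, NQNs and key name exactly as they appear in the log; key0 refers to a DH-HMAC-CHAP key registered earlier in the run, and this is an illustration of the pattern, not the target/auth.sh script itself.

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  HOST_SOCK=/var/tmp/host.sock
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

  # Host-side bdev_nvme app: pin negotiation to a single digest/dhgroup pair.
  "$RPC" -s "$HOST_SOCK" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

  # Target side (default RPC socket): allow the host with a specific DH-HMAC-CHAP key.
  "$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0

  # Host side: attach a controller with the matching key, then confirm on the target
  # that the resulting qpair negotiated sha512/ffdhe3072 and finished authentication.
  "$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0
  "$RPC" nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'   # expect "completed"

  # Tear down before the next digest/dhgroup/key combination.
  "$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0
  "$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"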
00:20:32.217 19:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.217 19:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.217 19:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:32.217 19:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:32.217 19:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:32.478 19:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 1 00:20:32.478 19:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:32.478 19:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:32.478 19:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:32.478 19:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:32.478 19:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:20:32.478 19:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.478 19:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.478 19:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.478 19:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:32.478 19:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:32.738 00:20:32.738 19:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:32.738 19:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:32.738 19:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.999 19:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.999 19:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.999 19:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.999 19:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.999 19:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.999 19:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:32.999 { 00:20:32.999 "cntlid": 115, 00:20:32.999 "qid": 0, 00:20:32.999 "state": "enabled", 00:20:32.999 "listen_address": { 00:20:32.999 "trtype": "TCP", 00:20:32.999 "adrfam": "IPv4", 00:20:32.999 "traddr": "10.0.0.2", 
00:20:32.999 "trsvcid": "4420" 00:20:32.999 }, 00:20:32.999 "peer_address": { 00:20:32.999 "trtype": "TCP", 00:20:32.999 "adrfam": "IPv4", 00:20:32.999 "traddr": "10.0.0.1", 00:20:32.999 "trsvcid": "46630" 00:20:32.999 }, 00:20:32.999 "auth": { 00:20:32.999 "state": "completed", 00:20:32.999 "digest": "sha512", 00:20:32.999 "dhgroup": "ffdhe3072" 00:20:32.999 } 00:20:32.999 } 00:20:32.999 ]' 00:20:32.999 19:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:32.999 19:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:32.999 19:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:33.260 19:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:33.260 19:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:33.260 19:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.260 19:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.260 19:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.520 19:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZDA1ZDM4YjRjYjcyODYxODY5MTMzZGFjMmU2ZjBlMzWEOGg3: 00:20:34.110 19:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.110 19:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:34.110 19:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.110 19:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.110 19:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.110 19:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:34.110 19:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:34.110 19:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:34.377 19:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 2 00:20:34.377 19:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:34.377 19:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:34.377 19:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:34.377 19:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:34.377 19:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 
00:20:34.377 19:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.377 19:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.377 19:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.377 19:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:34.377 19:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:34.638 00:20:34.638 19:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:34.638 19:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:34.638 19:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.899 19:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.899 19:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.899 19:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.899 19:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.899 19:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.899 19:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:34.899 { 00:20:34.899 "cntlid": 117, 00:20:34.899 "qid": 0, 00:20:34.899 "state": "enabled", 00:20:34.899 "listen_address": { 00:20:34.899 "trtype": "TCP", 00:20:34.899 "adrfam": "IPv4", 00:20:34.899 "traddr": "10.0.0.2", 00:20:34.899 "trsvcid": "4420" 00:20:34.899 }, 00:20:34.899 "peer_address": { 00:20:34.899 "trtype": "TCP", 00:20:34.899 "adrfam": "IPv4", 00:20:34.899 "traddr": "10.0.0.1", 00:20:34.899 "trsvcid": "55926" 00:20:34.899 }, 00:20:34.899 "auth": { 00:20:34.899 "state": "completed", 00:20:34.899 "digest": "sha512", 00:20:34.899 "dhgroup": "ffdhe3072" 00:20:34.899 } 00:20:34.899 } 00:20:34.899 ]' 00:20:34.899 19:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:34.899 19:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:34.899 19:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:34.899 19:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:34.899 19:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:34.899 19:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.160 19:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.160 19:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.160 19:36:01 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZWIyZjg3NjQ5ODNhNGQyOWI2NmRmNDY0MzBiNzViMTMxMjU1Y2JmYmNlZTc1OTcxvm1qIQ==: 00:20:36.184 19:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.184 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.184 19:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:36.184 19:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.184 19:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.184 19:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.184 19:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:36.184 19:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:36.184 19:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:36.184 19:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 3 00:20:36.184 19:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:36.184 19:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:36.184 19:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:36.184 19:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:36.184 19:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:36.184 19:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.184 19:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.184 19:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.184 19:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:36.184 19:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:36.445 00:20:36.445 19:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:36.445 19:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:36.445 19:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.706 19:36:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.706 19:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.706 19:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.706 19:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.706 19:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.706 19:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:36.706 { 00:20:36.706 "cntlid": 119, 00:20:36.706 "qid": 0, 00:20:36.706 "state": "enabled", 00:20:36.706 "listen_address": { 00:20:36.706 "trtype": "TCP", 00:20:36.706 "adrfam": "IPv4", 00:20:36.706 "traddr": "10.0.0.2", 00:20:36.706 "trsvcid": "4420" 00:20:36.706 }, 00:20:36.706 "peer_address": { 00:20:36.706 "trtype": "TCP", 00:20:36.706 "adrfam": "IPv4", 00:20:36.706 "traddr": "10.0.0.1", 00:20:36.706 "trsvcid": "55962" 00:20:36.706 }, 00:20:36.706 "auth": { 00:20:36.706 "state": "completed", 00:20:36.706 "digest": "sha512", 00:20:36.706 "dhgroup": "ffdhe3072" 00:20:36.706 } 00:20:36.706 } 00:20:36.706 ]' 00:20:36.706 19:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:36.706 19:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:36.706 19:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:36.965 19:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:36.965 19:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:36.965 19:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.965 19:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.965 19:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.225 19:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:YTI0MmFmM2IxMzE3NTYxNjZjYjkyYzFhNDcwMmIzZTAyOWNkOTdjNDEzMDU0NjBiYjZkNmI2NmY2NTA0ZjIyZq7CFVo=: 00:20:37.796 19:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.796 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.796 19:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:37.796 19:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.796 19:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.796 19:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.796 19:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:37.796 19:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:37.796 19:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe4096 00:20:37.796 19:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:38.056 19:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 0 00:20:38.056 19:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:38.056 19:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:38.056 19:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:38.056 19:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:38.056 19:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 00:20:38.056 19:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.056 19:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.056 19:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.056 19:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:38.056 19:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:38.317 00:20:38.317 19:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:38.317 19:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:38.317 19:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.578 19:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.578 19:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.578 19:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.578 19:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.578 19:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.578 19:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:38.578 { 00:20:38.578 "cntlid": 121, 00:20:38.578 "qid": 0, 00:20:38.578 "state": "enabled", 00:20:38.578 "listen_address": { 00:20:38.578 "trtype": "TCP", 00:20:38.578 "adrfam": "IPv4", 00:20:38.578 "traddr": "10.0.0.2", 00:20:38.578 "trsvcid": "4420" 00:20:38.578 }, 00:20:38.578 "peer_address": { 00:20:38.578 "trtype": "TCP", 00:20:38.578 "adrfam": "IPv4", 00:20:38.578 "traddr": "10.0.0.1", 00:20:38.578 "trsvcid": "55980" 00:20:38.578 }, 00:20:38.578 "auth": { 00:20:38.578 "state": "completed", 00:20:38.578 "digest": "sha512", 00:20:38.578 "dhgroup": "ffdhe4096" 00:20:38.578 } 00:20:38.578 } 00:20:38.578 ]' 00:20:38.578 19:36:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:38.578 19:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:38.578 19:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:38.578 19:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:38.578 19:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:38.838 19:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.838 19:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.838 19:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.839 19:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZWE5M2FjMmM5NzE0ZThjOGQ3YWE2Y2QwYzU5MTIwN2Q3NjBiMzYyYzA0YTM5OTNiBPoO1A==: 00:20:39.780 19:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.780 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.780 19:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:39.780 19:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.780 19:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.780 19:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.780 19:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:39.780 19:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:39.780 19:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:39.780 19:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 1 00:20:39.780 19:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:39.780 19:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:39.780 19:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:39.781 19:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:39.781 19:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:20:39.781 19:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.781 19:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.041 19:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.041 19:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:40.041 19:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:40.302 00:20:40.302 19:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:40.302 19:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:40.302 19:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.563 19:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.563 19:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.563 19:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.563 19:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.563 19:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.563 19:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:40.563 { 00:20:40.563 "cntlid": 123, 00:20:40.563 "qid": 0, 00:20:40.563 "state": "enabled", 00:20:40.563 "listen_address": { 00:20:40.563 "trtype": "TCP", 00:20:40.563 "adrfam": "IPv4", 00:20:40.563 "traddr": "10.0.0.2", 00:20:40.563 "trsvcid": "4420" 00:20:40.563 }, 00:20:40.563 "peer_address": { 00:20:40.563 "trtype": "TCP", 00:20:40.563 "adrfam": "IPv4", 00:20:40.563 "traddr": "10.0.0.1", 00:20:40.563 "trsvcid": "56004" 00:20:40.563 }, 00:20:40.563 "auth": { 00:20:40.563 "state": "completed", 00:20:40.563 "digest": "sha512", 00:20:40.563 "dhgroup": "ffdhe4096" 00:20:40.563 } 00:20:40.563 } 00:20:40.563 ]' 00:20:40.563 19:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:40.563 19:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:40.563 19:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:40.563 19:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:40.563 19:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:40.563 19:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.563 19:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.563 19:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.824 19:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZDA1ZDM4YjRjYjcyODYxODY5MTMzZGFjMmU2ZjBlMzWEOGg3: 00:20:41.395 19:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:41.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.655 19:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:41.655 19:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.655 19:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.655 19:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.655 19:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:41.655 19:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:41.655 19:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:41.656 19:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 2 00:20:41.656 19:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:41.656 19:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:41.656 19:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:41.656 19:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:41.656 19:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 00:20:41.656 19:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.656 19:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.656 19:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.656 19:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:41.656 19:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:42.227 00:20:42.227 19:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:42.227 19:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:42.227 19:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.227 19:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.227 19:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.227 19:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.227 19:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.227 
19:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.227 19:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:42.227 { 00:20:42.227 "cntlid": 125, 00:20:42.227 "qid": 0, 00:20:42.227 "state": "enabled", 00:20:42.227 "listen_address": { 00:20:42.227 "trtype": "TCP", 00:20:42.227 "adrfam": "IPv4", 00:20:42.227 "traddr": "10.0.0.2", 00:20:42.227 "trsvcid": "4420" 00:20:42.227 }, 00:20:42.227 "peer_address": { 00:20:42.227 "trtype": "TCP", 00:20:42.227 "adrfam": "IPv4", 00:20:42.227 "traddr": "10.0.0.1", 00:20:42.227 "trsvcid": "56038" 00:20:42.227 }, 00:20:42.227 "auth": { 00:20:42.227 "state": "completed", 00:20:42.227 "digest": "sha512", 00:20:42.227 "dhgroup": "ffdhe4096" 00:20:42.227 } 00:20:42.227 } 00:20:42.227 ]' 00:20:42.227 19:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:42.488 19:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:42.488 19:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:42.488 19:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:42.488 19:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:42.488 19:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.488 19:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.488 19:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.488 19:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZWIyZjg3NjQ5ODNhNGQyOWI2NmRmNDY0MzBiNzViMTMxMjU1Y2JmYmNlZTc1OTcxvm1qIQ==: 00:20:43.432 19:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.432 19:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:43.432 19:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.432 19:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.432 19:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.432 19:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:43.432 19:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:43.432 19:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:43.693 19:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 3 00:20:43.693 19:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:43.693 19:36:09 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha512 00:20:43.693 19:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:43.693 19:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:43.693 19:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:43.693 19:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.693 19:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.693 19:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.693 19:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:43.693 19:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:43.954 00:20:43.954 19:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:43.954 19:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.954 19:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:44.214 19:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.214 19:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.215 19:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.215 19:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.215 19:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.215 19:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:44.215 { 00:20:44.215 "cntlid": 127, 00:20:44.215 "qid": 0, 00:20:44.215 "state": "enabled", 00:20:44.215 "listen_address": { 00:20:44.215 "trtype": "TCP", 00:20:44.215 "adrfam": "IPv4", 00:20:44.215 "traddr": "10.0.0.2", 00:20:44.215 "trsvcid": "4420" 00:20:44.215 }, 00:20:44.215 "peer_address": { 00:20:44.215 "trtype": "TCP", 00:20:44.215 "adrfam": "IPv4", 00:20:44.215 "traddr": "10.0.0.1", 00:20:44.215 "trsvcid": "34396" 00:20:44.215 }, 00:20:44.215 "auth": { 00:20:44.215 "state": "completed", 00:20:44.215 "digest": "sha512", 00:20:44.215 "dhgroup": "ffdhe4096" 00:20:44.215 } 00:20:44.215 } 00:20:44.215 ]' 00:20:44.215 19:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:44.215 19:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:44.215 19:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:44.215 19:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:44.215 19:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:44.215 19:36:10 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.215 19:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.215 19:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.475 19:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:YTI0MmFmM2IxMzE3NTYxNjZjYjkyYzFhNDcwMmIzZTAyOWNkOTdjNDEzMDU0NjBiYjZkNmI2NmY2NTA0ZjIyZq7CFVo=: 00:20:45.417 19:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.417 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.417 19:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:45.417 19:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.417 19:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.417 19:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.417 19:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:45.417 19:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:45.417 19:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:45.417 19:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:45.417 19:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 0 00:20:45.417 19:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:45.417 19:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:45.417 19:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:45.417 19:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:45.417 19:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 00:20:45.417 19:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.417 19:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.417 19:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.417 19:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:45.417 19:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:45.989 00:20:45.989 19:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:45.989 19:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:45.989 19:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.989 19:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.989 19:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.989 19:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.989 19:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.989 19:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.989 19:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:45.989 { 00:20:45.989 "cntlid": 129, 00:20:45.989 "qid": 0, 00:20:45.989 "state": "enabled", 00:20:45.989 "listen_address": { 00:20:45.989 "trtype": "TCP", 00:20:45.989 "adrfam": "IPv4", 00:20:45.989 "traddr": "10.0.0.2", 00:20:45.989 "trsvcid": "4420" 00:20:45.989 }, 00:20:45.989 "peer_address": { 00:20:45.989 "trtype": "TCP", 00:20:45.989 "adrfam": "IPv4", 00:20:45.989 "traddr": "10.0.0.1", 00:20:45.989 "trsvcid": "34424" 00:20:45.989 }, 00:20:45.989 "auth": { 00:20:45.989 "state": "completed", 00:20:45.989 "digest": "sha512", 00:20:45.989 "dhgroup": "ffdhe6144" 00:20:45.989 } 00:20:45.989 } 00:20:45.989 ]' 00:20:45.989 19:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:46.249 19:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:46.249 19:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:46.249 19:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:46.249 19:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:46.249 19:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.249 19:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.249 19:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.509 19:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZWE5M2FjMmM5NzE0ZThjOGQ3YWE2Y2QwYzU5MTIwN2Q3NjBiMzYyYzA0YTM5OTNiBPoO1A==: 00:20:47.080 19:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.341 19:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:47.341 19:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:20:47.341 19:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.341 19:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.341 19:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:47.341 19:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:47.341 19:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:47.341 19:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 1 00:20:47.341 19:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:47.341 19:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:47.341 19:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:47.341 19:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:47.341 19:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:20:47.341 19:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.341 19:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.341 19:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.341 19:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:47.341 19:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:47.912 00:20:47.912 19:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:47.912 19:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:47.912 19:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.172 19:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.172 19:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.172 19:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.172 19:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.172 19:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.172 19:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:48.172 { 00:20:48.172 "cntlid": 131, 00:20:48.172 "qid": 0, 00:20:48.172 "state": "enabled", 00:20:48.172 "listen_address": { 00:20:48.172 "trtype": "TCP", 00:20:48.172 "adrfam": "IPv4", 00:20:48.172 "traddr": "10.0.0.2", 
00:20:48.172 "trsvcid": "4420" 00:20:48.172 }, 00:20:48.172 "peer_address": { 00:20:48.172 "trtype": "TCP", 00:20:48.172 "adrfam": "IPv4", 00:20:48.172 "traddr": "10.0.0.1", 00:20:48.172 "trsvcid": "34468" 00:20:48.172 }, 00:20:48.172 "auth": { 00:20:48.172 "state": "completed", 00:20:48.172 "digest": "sha512", 00:20:48.172 "dhgroup": "ffdhe6144" 00:20:48.172 } 00:20:48.172 } 00:20:48.172 ]' 00:20:48.172 19:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:48.172 19:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:48.172 19:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:48.172 19:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:48.172 19:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:48.172 19:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.172 19:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.172 19:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.432 19:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZDA1ZDM4YjRjYjcyODYxODY5MTMzZGFjMmU2ZjBlMzWEOGg3: 00:20:49.373 19:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.373 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.373 19:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:49.373 19:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.373 19:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.374 19:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.374 19:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:49.374 19:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:49.374 19:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:49.374 19:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 2 00:20:49.374 19:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:49.374 19:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:49.374 19:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:49.374 19:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:49.374 19:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 
00:20:49.374 19:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.374 19:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.374 19:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.374 19:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:49.374 19:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:49.944 00:20:49.944 19:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:49.944 19:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:49.944 19:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.944 19:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.944 19:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.944 19:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.944 19:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.944 19:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.944 19:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:49.944 { 00:20:49.944 "cntlid": 133, 00:20:49.944 "qid": 0, 00:20:49.944 "state": "enabled", 00:20:49.944 "listen_address": { 00:20:49.944 "trtype": "TCP", 00:20:49.944 "adrfam": "IPv4", 00:20:49.944 "traddr": "10.0.0.2", 00:20:49.944 "trsvcid": "4420" 00:20:49.944 }, 00:20:49.944 "peer_address": { 00:20:49.944 "trtype": "TCP", 00:20:49.944 "adrfam": "IPv4", 00:20:49.944 "traddr": "10.0.0.1", 00:20:49.944 "trsvcid": "34486" 00:20:49.944 }, 00:20:49.944 "auth": { 00:20:49.944 "state": "completed", 00:20:49.944 "digest": "sha512", 00:20:49.944 "dhgroup": "ffdhe6144" 00:20:49.944 } 00:20:49.944 } 00:20:49.944 ]' 00:20:49.944 19:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:50.204 19:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:50.204 19:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:50.204 19:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:50.204 19:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:50.204 19:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.204 19:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.204 19:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.464 19:36:16 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZWIyZjg3NjQ5ODNhNGQyOWI2NmRmNDY0MzBiNzViMTMxMjU1Y2JmYmNlZTc1OTcxvm1qIQ==: 00:20:51.035 19:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.035 19:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:51.295 19:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.295 19:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.295 19:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.295 19:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:51.295 19:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:51.295 19:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:51.295 19:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 3 00:20:51.295 19:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:51.295 19:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:51.295 19:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:51.295 19:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:51.295 19:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:51.295 19:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.295 19:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.295 19:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.295 19:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:51.295 19:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:51.865 00:20:51.865 19:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:51.866 19:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.866 19:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:52.126 19:36:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.126 19:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.126 19:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.126 19:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.126 19:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.126 19:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:52.126 { 00:20:52.126 "cntlid": 135, 00:20:52.126 "qid": 0, 00:20:52.126 "state": "enabled", 00:20:52.126 "listen_address": { 00:20:52.126 "trtype": "TCP", 00:20:52.126 "adrfam": "IPv4", 00:20:52.126 "traddr": "10.0.0.2", 00:20:52.126 "trsvcid": "4420" 00:20:52.126 }, 00:20:52.126 "peer_address": { 00:20:52.126 "trtype": "TCP", 00:20:52.126 "adrfam": "IPv4", 00:20:52.126 "traddr": "10.0.0.1", 00:20:52.126 "trsvcid": "34512" 00:20:52.126 }, 00:20:52.126 "auth": { 00:20:52.126 "state": "completed", 00:20:52.126 "digest": "sha512", 00:20:52.126 "dhgroup": "ffdhe6144" 00:20:52.126 } 00:20:52.126 } 00:20:52.126 ]' 00:20:52.126 19:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:52.126 19:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:52.126 19:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:52.126 19:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:52.126 19:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:52.126 19:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.126 19:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.126 19:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.387 19:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:YTI0MmFmM2IxMzE3NTYxNjZjYjkyYzFhNDcwMmIzZTAyOWNkOTdjNDEzMDU0NjBiYjZkNmI2NmY2NTA0ZjIyZq7CFVo=: 00:20:53.328 19:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.328 19:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:53.328 19:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.328 19:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.328 19:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.328 19:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:53.328 19:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:53.328 19:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe8192 00:20:53.328 19:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:53.328 19:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 0 00:20:53.328 19:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:53.328 19:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:53.328 19:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:53.328 19:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:53.328 19:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 00:20:53.328 19:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.328 19:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.328 19:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.328 19:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:53.328 19:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:53.899 00:20:53.899 19:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:53.899 19:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:53.900 19:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.160 19:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.160 19:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.160 19:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.160 19:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.160 19:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.160 19:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:54.160 { 00:20:54.160 "cntlid": 137, 00:20:54.160 "qid": 0, 00:20:54.160 "state": "enabled", 00:20:54.160 "listen_address": { 00:20:54.160 "trtype": "TCP", 00:20:54.160 "adrfam": "IPv4", 00:20:54.160 "traddr": "10.0.0.2", 00:20:54.160 "trsvcid": "4420" 00:20:54.160 }, 00:20:54.160 "peer_address": { 00:20:54.160 "trtype": "TCP", 00:20:54.160 "adrfam": "IPv4", 00:20:54.160 "traddr": "10.0.0.1", 00:20:54.160 "trsvcid": "33688" 00:20:54.160 }, 00:20:54.160 "auth": { 00:20:54.160 "state": "completed", 00:20:54.160 "digest": "sha512", 00:20:54.160 "dhgroup": "ffdhe8192" 00:20:54.160 } 00:20:54.160 } 00:20:54.160 ]' 00:20:54.160 19:36:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:54.160 19:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:54.160 19:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:54.420 19:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:54.420 19:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:54.420 19:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.420 19:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.420 19:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.680 19:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZWE5M2FjMmM5NzE0ZThjOGQ3YWE2Y2QwYzU5MTIwN2Q3NjBiMzYyYzA0YTM5OTNiBPoO1A==: 00:20:55.251 19:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.251 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.251 19:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:55.251 19:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.251 19:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.251 19:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.251 19:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:55.251 19:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:55.251 19:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:55.511 19:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 1 00:20:55.511 19:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:55.511 19:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:55.511 19:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:55.511 19:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:55.511 19:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:20:55.511 19:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.511 19:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.511 19:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.511 19:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:55.511 19:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:56.082 00:20:56.082 19:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:56.082 19:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:56.082 19:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.342 19:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.342 19:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.342 19:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.342 19:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.342 19:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.342 19:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:56.342 { 00:20:56.342 "cntlid": 139, 00:20:56.342 "qid": 0, 00:20:56.342 "state": "enabled", 00:20:56.342 "listen_address": { 00:20:56.342 "trtype": "TCP", 00:20:56.342 "adrfam": "IPv4", 00:20:56.342 "traddr": "10.0.0.2", 00:20:56.342 "trsvcid": "4420" 00:20:56.342 }, 00:20:56.342 "peer_address": { 00:20:56.342 "trtype": "TCP", 00:20:56.342 "adrfam": "IPv4", 00:20:56.342 "traddr": "10.0.0.1", 00:20:56.342 "trsvcid": "33712" 00:20:56.342 }, 00:20:56.342 "auth": { 00:20:56.342 "state": "completed", 00:20:56.342 "digest": "sha512", 00:20:56.342 "dhgroup": "ffdhe8192" 00:20:56.342 } 00:20:56.342 } 00:20:56.342 ]' 00:20:56.342 19:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:56.342 19:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:56.342 19:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:56.602 19:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:56.602 19:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:56.602 19:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.602 19:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.602 19:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.863 19:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZDA1ZDM4YjRjYjcyODYxODY5MTMzZGFjMmU2ZjBlMzWEOGg3: 00:20:57.433 19:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:57.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.433 19:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:57.433 19:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.433 19:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.433 19:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.433 19:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:57.433 19:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:57.433 19:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:57.693 19:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 2 00:20:57.693 19:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:57.693 19:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:57.693 19:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:57.693 19:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:57.693 19:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 00:20:57.693 19:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.693 19:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.693 19:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.693 19:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:57.693 19:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:58.262 00:20:58.262 19:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:58.262 19:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:58.262 19:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.523 19:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.523 19:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.523 19:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.523 19:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.523 
19:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.523 19:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:58.523 { 00:20:58.523 "cntlid": 141, 00:20:58.523 "qid": 0, 00:20:58.523 "state": "enabled", 00:20:58.523 "listen_address": { 00:20:58.523 "trtype": "TCP", 00:20:58.523 "adrfam": "IPv4", 00:20:58.523 "traddr": "10.0.0.2", 00:20:58.523 "trsvcid": "4420" 00:20:58.523 }, 00:20:58.523 "peer_address": { 00:20:58.523 "trtype": "TCP", 00:20:58.523 "adrfam": "IPv4", 00:20:58.523 "traddr": "10.0.0.1", 00:20:58.523 "trsvcid": "33742" 00:20:58.523 }, 00:20:58.523 "auth": { 00:20:58.523 "state": "completed", 00:20:58.523 "digest": "sha512", 00:20:58.523 "dhgroup": "ffdhe8192" 00:20:58.523 } 00:20:58.523 } 00:20:58.523 ]' 00:20:58.523 19:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:58.523 19:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:58.523 19:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:58.523 19:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:58.523 19:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:58.523 19:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.523 19:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.523 19:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.783 19:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZWIyZjg3NjQ5ODNhNGQyOWI2NmRmNDY0MzBiNzViMTMxMjU1Y2JmYmNlZTc1OTcxvm1qIQ==: 00:20:59.353 19:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.353 19:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:59.353 19:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.353 19:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.612 19:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.612 19:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:59.613 19:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:59.613 19:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:59.613 19:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 3 00:20:59.613 19:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:59.613 19:36:25 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha512 00:20:59.613 19:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:59.613 19:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:59.613 19:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:59.613 19:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.613 19:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.613 19:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.613 19:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:59.613 19:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:00.181 00:21:00.442 19:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:00.442 19:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:00.442 19:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.442 19:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.442 19:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.442 19:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.442 19:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.442 19:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.442 19:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:00.442 { 00:21:00.442 "cntlid": 143, 00:21:00.442 "qid": 0, 00:21:00.442 "state": "enabled", 00:21:00.442 "listen_address": { 00:21:00.442 "trtype": "TCP", 00:21:00.442 "adrfam": "IPv4", 00:21:00.442 "traddr": "10.0.0.2", 00:21:00.442 "trsvcid": "4420" 00:21:00.442 }, 00:21:00.442 "peer_address": { 00:21:00.442 "trtype": "TCP", 00:21:00.442 "adrfam": "IPv4", 00:21:00.442 "traddr": "10.0.0.1", 00:21:00.442 "trsvcid": "33784" 00:21:00.442 }, 00:21:00.442 "auth": { 00:21:00.442 "state": "completed", 00:21:00.442 "digest": "sha512", 00:21:00.442 "dhgroup": "ffdhe8192" 00:21:00.442 } 00:21:00.442 } 00:21:00.442 ]' 00:21:00.442 19:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:00.702 19:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:00.702 19:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:00.702 19:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:00.702 19:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:00.702 19:36:26 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.702 19:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.702 19:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.962 19:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:YTI0MmFmM2IxMzE3NTYxNjZjYjkyYzFhNDcwMmIzZTAyOWNkOTdjNDEzMDU0NjBiYjZkNmI2NmY2NTA0ZjIyZq7CFVo=: 00:21:01.532 19:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.532 19:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:01.532 19:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.532 19:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.532 19:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.532 19:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:21:01.532 19:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s sha256,sha384,sha512 00:21:01.532 19:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:21:01.532 19:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:01.532 19:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:01.532 19:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:01.792 19:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@107 -- # connect_authenticate sha512 ffdhe8192 0 00:21:01.792 19:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:01.792 19:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:01.792 19:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:01.792 19:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:01.792 19:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 00:21:01.792 19:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.792 19:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.792 19:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.792 19:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:01.792 19:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:02.362 00:21:02.362 19:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:02.362 19:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.362 19:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:02.626 19:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.626 19:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.626 19:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.626 19:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.626 19:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.626 19:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:02.626 { 00:21:02.626 "cntlid": 145, 00:21:02.626 "qid": 0, 00:21:02.626 "state": "enabled", 00:21:02.626 "listen_address": { 00:21:02.626 "trtype": "TCP", 00:21:02.626 "adrfam": "IPv4", 00:21:02.626 "traddr": "10.0.0.2", 00:21:02.626 "trsvcid": "4420" 00:21:02.626 }, 00:21:02.626 "peer_address": { 00:21:02.626 "trtype": "TCP", 00:21:02.626 "adrfam": "IPv4", 00:21:02.626 "traddr": "10.0.0.1", 00:21:02.626 "trsvcid": "33806" 00:21:02.626 }, 00:21:02.626 "auth": { 00:21:02.626 "state": "completed", 00:21:02.626 "digest": "sha512", 00:21:02.626 "dhgroup": "ffdhe8192" 00:21:02.626 } 00:21:02.626 } 00:21:02.626 ]' 00:21:02.626 19:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:02.626 19:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:02.626 19:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:02.626 19:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:02.626 19:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:02.626 19:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.626 19:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.626 19:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.928 19:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZWE5M2FjMmM5NzE0ZThjOGQ3YWE2Y2QwYzU5MTIwN2Q3NjBiMzYyYzA0YTM5OTNiBPoO1A==: 00:21:03.526 19:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.787 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.787 19:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:03.787 19:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.787 19:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.787 19:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.787 19:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@110 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:21:03.787 19:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.787 19:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.787 19:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.787 19:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@111 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:03.787 19:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:03.787 19:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:03.787 19:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:03.787 19:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:03.787 19:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:03.787 19:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:03.787 19:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:03.787 19:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:04.359 request: 00:21:04.359 { 00:21:04.359 "name": "nvme0", 00:21:04.359 "trtype": "tcp", 00:21:04.359 "traddr": "10.0.0.2", 00:21:04.359 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:04.359 "adrfam": "ipv4", 00:21:04.359 "trsvcid": "4420", 00:21:04.359 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:04.359 "dhchap_key": "key2", 00:21:04.359 "method": "bdev_nvme_attach_controller", 00:21:04.359 "req_id": 1 00:21:04.359 } 00:21:04.359 Got JSON-RPC error response 00:21:04.359 response: 00:21:04.359 { 00:21:04.359 "code": -32602, 00:21:04.359 "message": "Invalid parameters" 00:21:04.359 } 00:21:04.359 19:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 
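For reference, the failure above is the intended negative path of the DH-HMAC-CHAP flow: the host NQN was registered on the subsystem with key1 only, the initiator then deliberately attached with key2, and the target refused the CONNECT, which rpc.py surfaces as the -32602 "Invalid parameters" JSON-RPC error shown in the response. A minimal sketch of the same two steps, condensed from the xtrace above (rpc.py paths shortened; the host UUID and the /var/tmp/host.sock socket are the ones used in this run):

# target side: allow only key1 for this host on the subsystem
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1

# host side: attach with the wrong key; this is expected to fail with -32602
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2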
00:21:04.359 19:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:04.359 19:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:04.359 19:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:04.359 19:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:04.359 19:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.359 19:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.359 19:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.359 19:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@116 -- # trap - SIGINT SIGTERM EXIT 00:21:04.359 19:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # cleanup 00:21:04.359 19:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3594005 00:21:04.359 19:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 3594005 ']' 00:21:04.359 19:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 3594005 00:21:04.359 19:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:21:04.359 19:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:04.359 19:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3594005 00:21:04.359 19:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:04.359 19:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:04.359 19:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3594005' 00:21:04.359 killing process with pid 3594005 00:21:04.359 19:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 3594005 00:21:04.359 19:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 3594005 00:21:04.620 19:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:04.620 19:36:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:04.620 19:36:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:21:04.620 19:36:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:04.620 19:36:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:21:04.620 19:36:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:04.620 19:36:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:04.620 rmmod nvme_tcp 00:21:04.620 rmmod nvme_fabrics 00:21:04.620 rmmod nvme_keyring 00:21:04.620 19:36:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:04.620 19:36:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:21:04.620 19:36:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:21:04.620 19:36:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 3593954 ']' 00:21:04.620 19:36:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 3593954 00:21:04.620 19:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 3593954 ']' 00:21:04.620 19:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # 
kill -0 3593954 00:21:04.620 19:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:21:04.620 19:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:04.620 19:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3593954 00:21:04.620 19:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:04.620 19:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:04.620 19:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3593954' 00:21:04.620 killing process with pid 3593954 00:21:04.620 19:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 3593954 00:21:04.620 19:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 3593954 00:21:04.881 19:36:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:04.881 19:36:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:04.881 19:36:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:04.881 19:36:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:04.881 19:36:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:04.881 19:36:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.881 19:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:04.881 19:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.792 19:36:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:06.792 19:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.9FN /tmp/spdk.key-sha256.EMt /tmp/spdk.key-sha384.TJe /tmp/spdk.key-sha512.EPJ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:21:06.792 00:21:06.792 real 2m33.772s 00:21:06.792 user 5m47.633s 00:21:06.792 sys 0m22.902s 00:21:06.792 19:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:06.792 19:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.792 ************************************ 00:21:06.792 END TEST nvmf_auth_target 00:21:06.792 ************************************ 00:21:07.053 19:36:32 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:21:07.053 19:36:32 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:07.053 19:36:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:21:07.053 19:36:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:07.053 19:36:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:07.053 ************************************ 00:21:07.053 START TEST nvmf_bdevio_no_huge 00:21:07.053 ************************************ 00:21:07.053 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:07.053 * Looking for test storage... 
00:21:07.053 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:07.053 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:07.053 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:07.053 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:07.053 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:07.053 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:07.053 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:07.053 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:07.053 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:07.053 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:07.053 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:07.053 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:07.053 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:07.053 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:07.053 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:07.053 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:07.053 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:07.053 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:07.053 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:07.053 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:07.053 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:07.053 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:07.053 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:07.053 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.053 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.053 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.053 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:07.053 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.053 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:21:07.053 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:07.053 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:07.053 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:07.053 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:07.053 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:07.053 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:07.053 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:07.053 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:07.053 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:07.053 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:07.053 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:07.053 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:07.053 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:07.054 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:07.054 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:07.054 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:07.054 19:36:33 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.054 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:07.054 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.054 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:07.054 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:07.054 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:21:07.054 19:36:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:15.198 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:15.198 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:21:15.198 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:15.198 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:15.198 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:15.198 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:15.198 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:15.198 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:21:15.198 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:15.198 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:21:15.198 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:21:15.198 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:21:15.198 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:21:15.198 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:21:15.198 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:21:15.198 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:15.198 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:15.198 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:15.198 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:15.198 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:15.198 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:15.198 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:15.198 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:15.198 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:15.198 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:15.198 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:15.198 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:21:15.198 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:15.198 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:15.198 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:15.198 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:15.198 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:15.198 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:15.198 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:15.198 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:15.198 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:15.198 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:15.199 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:15.199 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:15.199 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:15.199 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:15.199 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:15.199 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:15.199 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:15.199 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:15.199 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:15.199 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:15.199 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:15.199 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:15.199 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:15.199 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:15.199 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:15.199 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:15.199 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:15.199 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:15.199 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:15.199 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:15.199 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:15.199 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:15.199 Found net devices under 0000:31:00.0: cvl_0_0 00:21:15.199 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:15.199 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:15.199 19:36:40 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:15.199 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:15.199 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:15.199 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:15.199 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:15.199 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:15.199 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:15.199 Found net devices under 0000:31:00.1: cvl_0_1 00:21:15.199 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:15.199 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:15.199 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:21:15.199 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:15.199 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:15.199 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:15.199 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:15.199 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:15.199 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:15.199 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:15.199 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:15.199 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:15.199 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:15.199 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:15.199 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:15.199 19:36:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:15.199 19:36:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:15.199 19:36:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:15.199 19:36:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:15.199 19:36:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:15.199 19:36:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:15.199 19:36:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:15.199 19:36:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:15.199 19:36:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:15.199 19:36:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:21:15.199 19:36:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:15.199 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:15.199 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.709 ms 00:21:15.199 00:21:15.199 --- 10.0.0.2 ping statistics --- 00:21:15.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.199 rtt min/avg/max/mdev = 0.709/0.709/0.709/0.000 ms 00:21:15.199 19:36:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:15.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:15.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:21:15.199 00:21:15.199 --- 10.0.0.1 ping statistics --- 00:21:15.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.199 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:21:15.199 19:36:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:15.199 19:36:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:21:15.199 19:36:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:15.199 19:36:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:15.199 19:36:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:15.199 19:36:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:15.199 19:36:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:15.199 19:36:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:15.199 19:36:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:15.199 19:36:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:15.199 19:36:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:15.199 19:36:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:15.199 19:36:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:15.460 19:36:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=3627436 00:21:15.460 19:36:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 3627436 00:21:15.460 19:36:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:15.460 19:36:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 3627436 ']' 00:21:15.460 19:36:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:15.460 19:36:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:15.460 19:36:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:15.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
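With the interfaces split across a network namespace and reachability confirmed by the two pings above, the no-huge bdevio variant starts nvmf_tgt with hugepages disabled and 1024 MB of ordinary memory on core mask 0x78 (cores 3 to 6, matching the reactor start messages further down). A condensed sketch of the plumbing and launch, taken from the commands logged above (interface names and the Jenkins build path are specific to this machine):

# put the target-side port in its own namespace and address both ends
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# launch the target inside the namespace without hugepages
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78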
00:21:15.460 19:36:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:15.460 19:36:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:15.460 [2024-05-15 19:36:41.438882] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:21:15.460 [2024-05-15 19:36:41.438951] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:15.460 [2024-05-15 19:36:41.540894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:15.722 [2024-05-15 19:36:41.649379] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:15.722 [2024-05-15 19:36:41.649431] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:15.722 [2024-05-15 19:36:41.649440] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:15.722 [2024-05-15 19:36:41.649447] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:15.722 [2024-05-15 19:36:41.649453] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:15.722 [2024-05-15 19:36:41.649629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:15.722 [2024-05-15 19:36:41.649791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:21:15.722 [2024-05-15 19:36:41.649956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:15.722 [2024-05-15 19:36:41.649957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:21:16.293 19:36:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:16.293 19:36:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:21:16.293 19:36:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:16.293 19:36:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:16.293 19:36:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:16.293 19:36:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:16.293 19:36:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:16.293 19:36:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.293 19:36:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:16.293 [2024-05-15 19:36:42.395043] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:16.293 19:36:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.293 19:36:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:16.293 19:36:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.293 19:36:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:16.293 Malloc0 00:21:16.293 19:36:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.293 19:36:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:16.293 19:36:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.293 19:36:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:16.293 19:36:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.293 19:36:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:16.293 19:36:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.293 19:36:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:16.293 19:36:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.293 19:36:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:16.293 19:36:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.293 19:36:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:16.293 [2024-05-15 19:36:42.448286] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:16.293 [2024-05-15 19:36:42.448594] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:16.293 19:36:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.293 19:36:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:16.293 19:36:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:16.293 19:36:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:21:16.293 19:36:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:21:16.293 19:36:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:16.293 19:36:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:16.293 { 00:21:16.293 "params": { 00:21:16.293 "name": "Nvme$subsystem", 00:21:16.293 "trtype": "$TEST_TRANSPORT", 00:21:16.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:16.293 "adrfam": "ipv4", 00:21:16.293 "trsvcid": "$NVMF_PORT", 00:21:16.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:16.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:16.293 "hdgst": ${hdgst:-false}, 00:21:16.293 "ddgst": ${ddgst:-false} 00:21:16.293 }, 00:21:16.293 "method": "bdev_nvme_attach_controller" 00:21:16.293 } 00:21:16.293 EOF 00:21:16.293 )") 00:21:16.293 19:36:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:21:16.293 19:36:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
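The bdevio binary itself runs under the same --no-huge -s 1024 limits and reads its bdev configuration from the JSON printed next, generated on the fly and passed in through /dev/fd/62. Before that launch, the target side was provisioned with a handful of RPCs; condensed from the rpc_cmd calls above (rpc.py path shortened):

# create the TCP transport and a 64 MiB, 512-byte-block malloc bdev
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0

# expose it through a subsystem listening on 10.0.0.2:4420
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420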
00:21:16.293 19:36:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:21:16.293 19:36:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:16.293 "params": { 00:21:16.293 "name": "Nvme1", 00:21:16.293 "trtype": "tcp", 00:21:16.293 "traddr": "10.0.0.2", 00:21:16.293 "adrfam": "ipv4", 00:21:16.293 "trsvcid": "4420", 00:21:16.293 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:16.293 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:16.293 "hdgst": false, 00:21:16.293 "ddgst": false 00:21:16.293 }, 00:21:16.293 "method": "bdev_nvme_attach_controller" 00:21:16.293 }' 00:21:16.554 [2024-05-15 19:36:42.501711] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:21:16.555 [2024-05-15 19:36:42.501780] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3627676 ] 00:21:16.555 [2024-05-15 19:36:42.595576] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:16.555 [2024-05-15 19:36:42.704979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:16.555 [2024-05-15 19:36:42.705112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:16.555 [2024-05-15 19:36:42.705115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.815 I/O targets: 00:21:16.815 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:16.815 00:21:16.815 00:21:16.815 CUnit - A unit testing framework for C - Version 2.1-3 00:21:16.815 http://cunit.sourceforge.net/ 00:21:16.815 00:21:16.815 00:21:16.815 Suite: bdevio tests on: Nvme1n1 00:21:16.815 Test: blockdev write read block ...passed 00:21:16.815 Test: blockdev write zeroes read block ...passed 00:21:16.815 Test: blockdev write zeroes read no split ...passed 00:21:17.076 Test: blockdev write zeroes read split ...passed 00:21:17.076 Test: blockdev write zeroes read split partial ...passed 00:21:17.076 Test: blockdev reset ...[2024-05-15 19:36:43.075851] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:17.076 [2024-05-15 19:36:43.075915] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x109dc70 (9): Bad file descriptor 00:21:17.076 [2024-05-15 19:36:43.128159] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:17.076 passed 00:21:17.076 Test: blockdev write read 8 blocks ...passed 00:21:17.076 Test: blockdev write read size > 128k ...passed 00:21:17.076 Test: blockdev write read invalid size ...passed 00:21:17.076 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:17.076 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:17.076 Test: blockdev write read max offset ...passed 00:21:17.076 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:17.338 Test: blockdev writev readv 8 blocks ...passed 00:21:17.338 Test: blockdev writev readv 30 x 1block ...passed 00:21:17.338 Test: blockdev writev readv block ...passed 00:21:17.338 Test: blockdev writev readv size > 128k ...passed 00:21:17.338 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:17.338 Test: blockdev comparev and writev ...[2024-05-15 19:36:43.354520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:17.338 [2024-05-15 19:36:43.354545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.338 [2024-05-15 19:36:43.354556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:17.338 [2024-05-15 19:36:43.354562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:17.338 [2024-05-15 19:36:43.355128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:17.338 [2024-05-15 19:36:43.355137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:17.338 [2024-05-15 19:36:43.355147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:17.338 [2024-05-15 19:36:43.355152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:17.338 [2024-05-15 19:36:43.355646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:17.338 [2024-05-15 19:36:43.355654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:17.338 [2024-05-15 19:36:43.355664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:17.338 [2024-05-15 19:36:43.355673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:17.338 [2024-05-15 19:36:43.356196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:17.338 [2024-05-15 19:36:43.356204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:17.338 [2024-05-15 19:36:43.356214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:17.338 [2024-05-15 19:36:43.356219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:17.338 passed 00:21:17.338 Test: blockdev nvme passthru rw ...passed 00:21:17.338 Test: blockdev nvme passthru vendor specific ...[2024-05-15 19:36:43.440993] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:17.338 [2024-05-15 19:36:43.441004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:17.338 [2024-05-15 19:36:43.441393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:17.338 [2024-05-15 19:36:43.441402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:17.338 [2024-05-15 19:36:43.441804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:17.338 [2024-05-15 19:36:43.441811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:17.338 [2024-05-15 19:36:43.442203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:17.338 [2024-05-15 19:36:43.442210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:17.338 passed 00:21:17.338 Test: blockdev nvme admin passthru ...passed 00:21:17.338 Test: blockdev copy ...passed 00:21:17.338 00:21:17.338 Run Summary: Type Total Ran Passed Failed Inactive 00:21:17.338 suites 1 1 n/a 0 0 00:21:17.338 tests 23 23 23 0 0 00:21:17.338 asserts 152 152 152 0 n/a 00:21:17.338 00:21:17.338 Elapsed time = 1.278 seconds 00:21:17.599 19:36:43 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:17.599 19:36:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.599 19:36:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:17.599 19:36:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.599 19:36:43 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:17.599 19:36:43 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:17.599 19:36:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:17.599 19:36:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:21:17.599 19:36:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:17.599 19:36:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:21:17.599 19:36:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:17.599 19:36:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:17.599 rmmod nvme_tcp 00:21:17.859 rmmod nvme_fabrics 00:21:17.859 rmmod nvme_keyring 00:21:17.859 19:36:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:17.859 19:36:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:21:17.859 19:36:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:21:17.859 19:36:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 3627436 ']' 00:21:17.859 19:36:43 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 3627436 00:21:17.859 19:36:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 3627436 ']' 00:21:17.859 19:36:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 3627436 00:21:17.859 19:36:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:21:17.860 19:36:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:17.860 19:36:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3627436 00:21:17.860 19:36:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:21:17.860 19:36:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:21:17.860 19:36:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3627436' 00:21:17.860 killing process with pid 3627436 00:21:17.860 19:36:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 3627436 00:21:17.860 [2024-05-15 19:36:43.891083] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:17.860 19:36:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 3627436 00:21:18.120 19:36:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:18.120 19:36:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:18.120 19:36:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:18.121 19:36:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:18.121 19:36:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:18.121 19:36:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:18.121 19:36:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:18.121 19:36:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:20.669 19:36:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:20.669 00:21:20.669 real 0m13.218s 00:21:20.669 user 0m14.306s 00:21:20.669 sys 0m7.179s 00:21:20.669 19:36:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:20.669 19:36:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:20.669 ************************************ 00:21:20.669 END TEST nvmf_bdevio_no_huge 00:21:20.669 ************************************ 00:21:20.669 19:36:46 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:20.669 19:36:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:20.669 19:36:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:20.669 19:36:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:20.669 ************************************ 00:21:20.669 START TEST nvmf_tls 00:21:20.669 ************************************ 00:21:20.669 19:36:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 
00:21:20.669 * Looking for test storage... 00:21:20.669 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:20.669 19:36:46 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:20.669 19:36:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:20.669 19:36:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:20.669 19:36:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:20.669 19:36:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:20.669 19:36:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:20.669 19:36:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:20.669 19:36:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:20.669 19:36:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:20.669 19:36:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:20.669 19:36:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:20.669 19:36:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:20.669 19:36:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:20.669 19:36:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:20.669 19:36:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:20.669 19:36:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:20.669 19:36:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:20.669 19:36:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:20.669 19:36:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:20.669 19:36:46 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:20.669 19:36:46 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:20.669 19:36:46 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:20.669 19:36:46 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.669 19:36:46 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:21:20.670 19:36:46 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.670 19:36:46 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:20.670 19:36:46 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.670 19:36:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:21:20.670 19:36:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:20.670 19:36:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:20.670 19:36:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:20.670 19:36:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:20.670 19:36:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:20.670 19:36:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:20.670 19:36:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:20.670 19:36:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:20.670 19:36:46 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:20.670 19:36:46 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:21:20.670 19:36:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:20.670 19:36:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:20.670 19:36:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:20.670 19:36:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:20.670 19:36:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:20.670 19:36:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:20.670 19:36:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:20.670 19:36:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:20.670 19:36:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:20.670 19:36:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:20.670 19:36:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:21:20.670 19:36:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@291 -- # pci_devs=() 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:28.814 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:28.814 
19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:28.814 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:28.814 Found net devices under 0000:31:00.0: cvl_0_0 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:28.814 Found net devices under 0000:31:00.1: cvl_0_1 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:28.814 
19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:28.814 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:28.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.720 ms 00:21:28.814 00:21:28.814 --- 10.0.0.2 ping statistics --- 00:21:28.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.814 rtt min/avg/max/mdev = 0.720/0.720/0.720/0.000 ms 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:28.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:28.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.366 ms 00:21:28.814 00:21:28.814 --- 10.0.0.1 ping statistics --- 00:21:28.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.814 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3632693 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3632693 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3632693 ']' 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:28.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:28.814 19:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:28.814 [2024-05-15 19:36:54.862624] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:21:28.814 [2024-05-15 19:36:54.862692] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:28.814 EAL: No free 2048 kB hugepages reported on node 1 00:21:28.814 [2024-05-15 19:36:54.941544] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.074 [2024-05-15 19:36:55.013491] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:29.074 [2024-05-15 19:36:55.013528] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:29.074 [2024-05-15 19:36:55.013540] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:29.074 [2024-05-15 19:36:55.013547] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:29.074 [2024-05-15 19:36:55.013552] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:29.075 [2024-05-15 19:36:55.013570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:29.646 19:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:29.646 19:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:29.646 19:36:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:29.646 19:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:29.646 19:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:29.646 19:36:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:29.646 19:36:55 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:21:29.646 19:36:55 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:29.907 true 00:21:29.907 19:36:55 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:29.907 19:36:55 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:21:30.169 19:36:56 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:21:30.169 19:36:56 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:21:30.169 19:36:56 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:30.429 19:36:56 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:30.429 19:36:56 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:21:30.429 19:36:56 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:21:30.429 19:36:56 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:21:30.429 19:36:56 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:30.690 19:36:56 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:30.690 19:36:56 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:21:30.950 19:36:57 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:21:30.950 19:36:57 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:21:30.950 19:36:57 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:30.950 19:36:57 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:21:31.210 19:36:57 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:21:31.210 19:36:57 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:21:31.210 19:36:57 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:31.470 19:36:57 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:31.470 19:36:57 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:21:31.470 19:36:57 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:21:31.470 19:36:57 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:21:31.470 19:36:57 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:31.730 19:36:57 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:21:31.730 19:36:57 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:31.991 19:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:21:31.991 19:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:21:31.991 19:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:21:31.991 19:36:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:31.991 19:36:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:31.991 19:36:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:31.991 19:36:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:31.991 19:36:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:31.991 19:36:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:31.991 19:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:31.991 19:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:31.991 19:36:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:21:31.991 19:36:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:31.991 19:36:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:31.991 19:36:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:21:31.991 19:36:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:31.991 19:36:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:31.991 19:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:31.991 19:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:21:31.991 19:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.1H23aOuBJo 00:21:31.991 19:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:21:31.991 19:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.vH6vcTP6MA 00:21:31.991 19:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:31.991 19:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:31.991 19:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.1H23aOuBJo 00:21:31.991 19:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.vH6vcTP6MA 00:21:31.991 19:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:21:32.251 19:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:21:32.511 19:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.1H23aOuBJo 00:21:32.511 19:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.1H23aOuBJo 00:21:32.511 19:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:32.772 [2024-05-15 19:36:58.803408] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:32.772 19:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:33.033 19:36:59 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:33.033 [2024-05-15 19:36:59.208411] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:33.033 [2024-05-15 19:36:59.208463] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:33.033 [2024-05-15 19:36:59.208642] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:33.292 19:36:59 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:33.292 malloc0 00:21:33.292 19:36:59 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:33.553 19:36:59 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1H23aOuBJo 00:21:33.813 [2024-05-15 19:36:59.816765] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:33.813 19:36:59 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.1H23aOuBJo 00:21:33.813 EAL: No free 2048 kB hugepages reported on node 1 00:21:43.810 Initializing NVMe Controllers 00:21:43.810 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:43.810 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:43.810 Initialization complete. Launching workers. 
00:21:43.810 ======================================================== 00:21:43.810 Latency(us) 00:21:43.810 Device Information : IOPS MiB/s Average min max 00:21:43.810 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13472.99 52.63 4750.91 1020.21 6491.73 00:21:43.810 ======================================================== 00:21:43.810 Total : 13472.99 52.63 4750.91 1020.21 6491.73 00:21:43.810 00:21:43.810 19:37:09 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1H23aOuBJo 00:21:43.810 19:37:09 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:43.810 19:37:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:43.810 19:37:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:43.810 19:37:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.1H23aOuBJo' 00:21:43.810 19:37:09 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:43.810 19:37:09 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3635750 00:21:43.810 19:37:09 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:43.810 19:37:09 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3635750 /var/tmp/bdevperf.sock 00:21:43.810 19:37:09 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:43.810 19:37:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3635750 ']' 00:21:43.810 19:37:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:43.810 19:37:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:43.810 19:37:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:43.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:43.811 19:37:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:43.811 19:37:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.071 [2024-05-15 19:37:10.005258] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
00:21:44.071 [2024-05-15 19:37:10.005319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3635750 ] 00:21:44.071 EAL: No free 2048 kB hugepages reported on node 1 00:21:44.071 [2024-05-15 19:37:10.061877] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.071 [2024-05-15 19:37:10.116252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:44.071 19:37:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:44.071 19:37:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:44.071 19:37:10 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1H23aOuBJo 00:21:44.331 [2024-05-15 19:37:10.371696] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:44.331 [2024-05-15 19:37:10.371755] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:44.331 TLSTESTn1 00:21:44.331 19:37:10 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:44.592 Running I/O for 10 seconds... 00:21:54.683 00:21:54.683 Latency(us) 00:21:54.683 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:54.683 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:54.683 Verification LBA range: start 0x0 length 0x2000 00:21:54.683 TLSTESTn1 : 10.02 3555.41 13.89 0.00 0.00 35948.99 5515.95 53084.16 00:21:54.683 =================================================================================================================== 00:21:54.683 Total : 3555.41 13.89 0.00 0.00 35948.99 5515.95 53084.16 00:21:54.683 0 00:21:54.683 19:37:20 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:54.683 19:37:20 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3635750 00:21:54.683 19:37:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3635750 ']' 00:21:54.683 19:37:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3635750 00:21:54.683 19:37:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:54.683 19:37:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:54.683 19:37:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3635750 00:21:54.683 19:37:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:21:54.683 19:37:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:21:54.683 19:37:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3635750' 00:21:54.683 killing process with pid 3635750 00:21:54.683 19:37:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3635750 00:21:54.683 Received shutdown signal, test time was about 10.000000 seconds 00:21:54.683 00:21:54.683 Latency(us) 00:21:54.683 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:54.683 =================================================================================================================== 00:21:54.683 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:54.683 [2024-05-15 19:37:20.674492] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:54.683 19:37:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3635750 00:21:54.683 19:37:20 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vH6vcTP6MA 00:21:54.683 19:37:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:54.683 19:37:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vH6vcTP6MA 00:21:54.683 19:37:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:54.683 19:37:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:54.683 19:37:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:54.683 19:37:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:54.683 19:37:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vH6vcTP6MA 00:21:54.683 19:37:20 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:54.683 19:37:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:54.683 19:37:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:54.683 19:37:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.vH6vcTP6MA' 00:21:54.683 19:37:20 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:54.683 19:37:20 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3637770 00:21:54.683 19:37:20 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:54.683 19:37:20 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:54.683 19:37:20 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3637770 /var/tmp/bdevperf.sock 00:21:54.683 19:37:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3637770 ']' 00:21:54.683 19:37:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:54.683 19:37:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:54.683 19:37:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:54.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:54.683 19:37:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:54.683 19:37:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:54.683 [2024-05-15 19:37:20.837801] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
00:21:54.683 [2024-05-15 19:37:20.837857] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3637770 ] 00:21:54.683 EAL: No free 2048 kB hugepages reported on node 1 00:21:54.945 [2024-05-15 19:37:20.892016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:54.945 [2024-05-15 19:37:20.943447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:54.945 19:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:54.945 19:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:54.945 19:37:21 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vH6vcTP6MA 00:21:55.206 [2024-05-15 19:37:21.198757] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:55.207 [2024-05-15 19:37:21.198821] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:55.207 [2024-05-15 19:37:21.203222] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:55.207 [2024-05-15 19:37:21.203865] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46b80 (107): Transport endpoint is not connected 00:21:55.207 [2024-05-15 19:37:21.204860] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46b80 (9): Bad file descriptor 00:21:55.207 [2024-05-15 19:37:21.205862] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:55.207 [2024-05-15 19:37:21.205869] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:55.207 [2024-05-15 19:37:21.205876] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:55.207 request: 00:21:55.207 { 00:21:55.207 "name": "TLSTEST", 00:21:55.207 "trtype": "tcp", 00:21:55.207 "traddr": "10.0.0.2", 00:21:55.207 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:55.207 "adrfam": "ipv4", 00:21:55.207 "trsvcid": "4420", 00:21:55.207 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:55.207 "psk": "/tmp/tmp.vH6vcTP6MA", 00:21:55.207 "method": "bdev_nvme_attach_controller", 00:21:55.207 "req_id": 1 00:21:55.207 } 00:21:55.207 Got JSON-RPC error response 00:21:55.207 response: 00:21:55.207 { 00:21:55.207 "code": -32602, 00:21:55.207 "message": "Invalid parameters" 00:21:55.207 } 00:21:55.207 19:37:21 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3637770 00:21:55.207 19:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3637770 ']' 00:21:55.207 19:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3637770 00:21:55.207 19:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:55.207 19:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:55.207 19:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3637770 00:21:55.207 19:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:21:55.207 19:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:21:55.207 19:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3637770' 00:21:55.207 killing process with pid 3637770 00:21:55.207 19:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3637770 00:21:55.207 Received shutdown signal, test time was about 10.000000 seconds 00:21:55.207 00:21:55.207 Latency(us) 00:21:55.207 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:55.207 =================================================================================================================== 00:21:55.207 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:55.207 [2024-05-15 19:37:21.292861] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:55.207 19:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3637770 00:21:55.469 19:37:21 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:55.469 19:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:55.469 19:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:55.469 19:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:55.469 19:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:55.469 19:37:21 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.1H23aOuBJo 00:21:55.469 19:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:55.469 19:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.1H23aOuBJo 00:21:55.469 19:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:55.469 19:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:55.469 19:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:55.469 19:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 
-- # case "$(type -t "$arg")" in 00:21:55.469 19:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.1H23aOuBJo 00:21:55.469 19:37:21 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:55.469 19:37:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:55.469 19:37:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:55.469 19:37:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.1H23aOuBJo' 00:21:55.469 19:37:21 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:55.469 19:37:21 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3637794 00:21:55.469 19:37:21 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:55.469 19:37:21 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3637794 /var/tmp/bdevperf.sock 00:21:55.469 19:37:21 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:55.469 19:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3637794 ']' 00:21:55.469 19:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:55.469 19:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:55.469 19:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:55.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:55.469 19:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:55.469 19:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:55.469 [2024-05-15 19:37:21.458803] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
00:21:55.469 [2024-05-15 19:37:21.458854] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3637794 ] 00:21:55.469 EAL: No free 2048 kB hugepages reported on node 1 00:21:55.469 [2024-05-15 19:37:21.515120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.469 [2024-05-15 19:37:21.566077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:56.414 19:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:56.414 19:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:56.414 19:37:22 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.1H23aOuBJo 00:21:56.414 [2024-05-15 19:37:22.414892] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:56.414 [2024-05-15 19:37:22.414959] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:56.414 [2024-05-15 19:37:22.425691] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:56.414 [2024-05-15 19:37:22.425714] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:56.414 [2024-05-15 19:37:22.425739] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:56.414 [2024-05-15 19:37:22.426938] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1073b80 (107): Transport endpoint is not connected 00:21:56.414 [2024-05-15 19:37:22.427934] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1073b80 (9): Bad file descriptor 00:21:56.414 [2024-05-15 19:37:22.428935] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:56.414 [2024-05-15 19:37:22.428942] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:56.414 [2024-05-15 19:37:22.428949] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:56.414 request: 00:21:56.414 { 00:21:56.414 "name": "TLSTEST", 00:21:56.414 "trtype": "tcp", 00:21:56.414 "traddr": "10.0.0.2", 00:21:56.415 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:56.415 "adrfam": "ipv4", 00:21:56.415 "trsvcid": "4420", 00:21:56.415 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:56.415 "psk": "/tmp/tmp.1H23aOuBJo", 00:21:56.415 "method": "bdev_nvme_attach_controller", 00:21:56.415 "req_id": 1 00:21:56.415 } 00:21:56.415 Got JSON-RPC error response 00:21:56.415 response: 00:21:56.415 { 00:21:56.415 "code": -32602, 00:21:56.415 "message": "Invalid parameters" 00:21:56.415 } 00:21:56.415 19:37:22 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3637794 00:21:56.415 19:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3637794 ']' 00:21:56.415 19:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3637794 00:21:56.415 19:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:56.415 19:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:56.415 19:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3637794 00:21:56.415 19:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:21:56.415 19:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:21:56.415 19:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3637794' 00:21:56.415 killing process with pid 3637794 00:21:56.415 19:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3637794 00:21:56.415 Received shutdown signal, test time was about 10.000000 seconds 00:21:56.415 00:21:56.415 Latency(us) 00:21:56.415 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:56.415 =================================================================================================================== 00:21:56.415 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:56.415 [2024-05-15 19:37:22.511773] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:56.415 19:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3637794 00:21:56.676 19:37:22 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:56.676 19:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:56.676 19:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:56.676 19:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:56.676 19:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:56.676 19:37:22 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.1H23aOuBJo 00:21:56.676 19:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:56.676 19:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.1H23aOuBJo 00:21:56.676 19:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:56.676 19:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:56.676 19:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:56.676 19:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 
-- # case "$(type -t "$arg")" in 00:21:56.676 19:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.1H23aOuBJo 00:21:56.676 19:37:22 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:56.676 19:37:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:56.676 19:37:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:56.676 19:37:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.1H23aOuBJo' 00:21:56.676 19:37:22 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:56.676 19:37:22 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3638124 00:21:56.676 19:37:22 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:56.676 19:37:22 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3638124 /var/tmp/bdevperf.sock 00:21:56.676 19:37:22 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:56.676 19:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3638124 ']' 00:21:56.676 19:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:56.676 19:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:56.676 19:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:56.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:56.676 19:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:56.676 19:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:56.676 [2024-05-15 19:37:22.665465] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
00:21:56.676 [2024-05-15 19:37:22.665517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3638124 ] 00:21:56.676 EAL: No free 2048 kB hugepages reported on node 1 00:21:56.676 [2024-05-15 19:37:22.721043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.676 [2024-05-15 19:37:22.771361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:57.619 19:37:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:57.619 19:37:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:57.620 19:37:23 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1H23aOuBJo 00:21:57.620 [2024-05-15 19:37:23.636071] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:57.620 [2024-05-15 19:37:23.636129] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:57.620 [2024-05-15 19:37:23.640390] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:57.620 [2024-05-15 19:37:23.640411] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:57.620 [2024-05-15 19:37:23.640434] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:57.620 [2024-05-15 19:37:23.641094] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c0b80 (107): Transport endpoint is not connected 00:21:57.620 [2024-05-15 19:37:23.642088] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c0b80 (9): Bad file descriptor 00:21:57.620 [2024-05-15 19:37:23.643090] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:57.620 [2024-05-15 19:37:23.643098] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:57.620 [2024-05-15 19:37:23.643105] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:21:57.620 request: 00:21:57.620 { 00:21:57.620 "name": "TLSTEST", 00:21:57.620 "trtype": "tcp", 00:21:57.620 "traddr": "10.0.0.2", 00:21:57.620 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:57.620 "adrfam": "ipv4", 00:21:57.620 "trsvcid": "4420", 00:21:57.620 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:57.620 "psk": "/tmp/tmp.1H23aOuBJo", 00:21:57.620 "method": "bdev_nvme_attach_controller", 00:21:57.620 "req_id": 1 00:21:57.620 } 00:21:57.620 Got JSON-RPC error response 00:21:57.620 response: 00:21:57.620 { 00:21:57.620 "code": -32602, 00:21:57.620 "message": "Invalid parameters" 00:21:57.620 } 00:21:57.620 19:37:23 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3638124 00:21:57.620 19:37:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3638124 ']' 00:21:57.620 19:37:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3638124 00:21:57.620 19:37:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:57.620 19:37:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:57.620 19:37:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3638124 00:21:57.620 19:37:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:21:57.620 19:37:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:21:57.620 19:37:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3638124' 00:21:57.620 killing process with pid 3638124 00:21:57.620 19:37:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3638124 00:21:57.620 Received shutdown signal, test time was about 10.000000 seconds 00:21:57.620 00:21:57.620 Latency(us) 00:21:57.620 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:57.620 =================================================================================================================== 00:21:57.620 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:57.620 [2024-05-15 19:37:23.726260] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:57.620 19:37:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3638124 00:21:57.881 19:37:23 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:57.881 19:37:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:57.881 19:37:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:57.881 19:37:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:57.881 19:37:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:57.881 19:37:23 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:57.881 19:37:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:57.881 19:37:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:57.881 19:37:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:57.881 19:37:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:57.881 19:37:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:57.881 19:37:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
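Each of the three failing runs so far (wrong key file for host1/cnode1, right key but unknown host2, right key but unregistered cnode2) is wrapped in the NOT helper from autotest_common.sh, so bdevperf failing to attach and run_bdevperf returning 1 is the passing outcome. The helper's implementation is not shown in this trace; as a rough stand-in, the pattern amounts to inverting the exit status, sketched below with a hypothetical expect_failure function and the cnode2 attach copied verbatim from the log (rpc.py again stands in for the full scripts/rpc.py path).

  # hypothetical stand-in for autotest_common.sh's NOT: succeed only if the command fails
  expect_failure() { if "$@"; then return 1; else return 0; fi; }

  expect_failure rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1H23aOuBJo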
00:21:57.881 19:37:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:57.881 19:37:23 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:57.881 19:37:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:57.881 19:37:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:57.881 19:37:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:21:57.881 19:37:23 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:57.881 19:37:23 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3638459 00:21:57.881 19:37:23 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:57.881 19:37:23 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3638459 /var/tmp/bdevperf.sock 00:21:57.881 19:37:23 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:57.881 19:37:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3638459 ']' 00:21:57.881 19:37:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:57.881 19:37:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:57.881 19:37:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:57.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:57.881 19:37:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:57.882 19:37:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:57.882 [2024-05-15 19:37:23.880405] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
00:21:57.882 [2024-05-15 19:37:23.880458] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3638459 ] 00:21:57.882 EAL: No free 2048 kB hugepages reported on node 1 00:21:57.882 [2024-05-15 19:37:23.936082] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.882 [2024-05-15 19:37:23.986131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:58.824 19:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:58.824 19:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:58.824 19:37:24 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:58.824 [2024-05-15 19:37:24.811607] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:58.824 [2024-05-15 19:37:24.813359] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x93e420 (9): Bad file descriptor 00:21:58.824 [2024-05-15 19:37:24.814358] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:58.824 [2024-05-15 19:37:24.814366] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:58.824 [2024-05-15 19:37:24.814373] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:58.824 request: 00:21:58.824 { 00:21:58.824 "name": "TLSTEST", 00:21:58.824 "trtype": "tcp", 00:21:58.824 "traddr": "10.0.0.2", 00:21:58.824 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:58.824 "adrfam": "ipv4", 00:21:58.824 "trsvcid": "4420", 00:21:58.824 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:58.824 "method": "bdev_nvme_attach_controller", 00:21:58.824 "req_id": 1 00:21:58.824 } 00:21:58.824 Got JSON-RPC error response 00:21:58.824 response: 00:21:58.824 { 00:21:58.824 "code": -32602, 00:21:58.824 "message": "Invalid parameters" 00:21:58.824 } 00:21:58.824 19:37:24 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3638459 00:21:58.824 19:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3638459 ']' 00:21:58.824 19:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3638459 00:21:58.824 19:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:58.824 19:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:58.824 19:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3638459 00:21:58.824 19:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:21:58.824 19:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:21:58.824 19:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3638459' 00:21:58.824 killing process with pid 3638459 00:21:58.824 19:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3638459 00:21:58.824 Received shutdown signal, test time was about 10.000000 seconds 00:21:58.824 00:21:58.824 Latency(us) 00:21:58.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:58.824 =================================================================================================================== 00:21:58.824 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:58.824 19:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3638459 00:21:58.824 19:37:24 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:58.824 19:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:58.824 19:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:58.824 19:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:58.824 19:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:58.824 19:37:24 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 3632693 00:21:58.824 19:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3632693 ']' 00:21:58.825 19:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3632693 00:21:58.825 19:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:58.825 19:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:58.825 19:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3632693 00:21:59.087 19:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:59.087 19:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:59.087 19:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3632693' 00:21:59.087 killing process with pid 3632693 00:21:59.087 19:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3632693 
00:21:59.087 [2024-05-15 19:37:25.054861] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:59.087 [2024-05-15 19:37:25.054893] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:59.087 19:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3632693 00:21:59.087 19:37:25 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:59.087 19:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:21:59.087 19:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:59.087 19:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:59.087 19:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:59.087 19:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:21:59.087 19:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:59.087 19:37:25 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:59.087 19:37:25 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:21:59.087 19:37:25 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.0gCA0d51z1 00:21:59.087 19:37:25 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:59.087 19:37:25 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.0gCA0d51z1 00:21:59.087 19:37:25 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:21:59.087 19:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:59.087 19:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:59.087 19:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:59.087 19:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3638661 00:21:59.087 19:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3638661 00:21:59.087 19:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:59.087 19:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3638661 ']' 00:21:59.087 19:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.087 19:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:59.087 19:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.087 19:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:59.087 19:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:59.348 [2024-05-15 19:37:25.291604] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
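For reference, the key_long produced by format_interchange_psk in the trace above follows the TLS PSK interchange layout NVMeTLSkey-1:<hash id>:<base64 data>:. Below is a minimal Python sketch of that wrapping, under the assumption that the base64 payload is the configured key bytes followed by their CRC-32 in little-endian order; the hash id is only echoed into the prefix here, so this illustrates the layout rather than reproducing SPDK's helper exactly.

import base64
import struct
import zlib

def format_interchange_psk(key: str, hash_id: int) -> str:
    # Sketch of the wrapping done by nvmf/common.sh format_key in the trace above.
    # Assumption: configured key bytes + little-endian CRC-32, then base64-encode.
    data = key.encode()
    crc = struct.pack("<I", zlib.crc32(data) & 0xFFFFFFFF)
    return "NVMeTLSkey-1:{:02}:{}:".format(hash_id, base64.b64encode(data + crc).decode())

# The trace feeds a 48-character key through this wrapping, writes the result to a
# mktemp file and restricts it to mode 0600 before passing the path via --psk to
# nvmf_subsystem_add_host and bdev_nvme_attach_controller.
print(format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2))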
00:21:59.348 [2024-05-15 19:37:25.291660] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:59.348 EAL: No free 2048 kB hugepages reported on node 1 00:21:59.348 [2024-05-15 19:37:25.363794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.348 [2024-05-15 19:37:25.430283] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:59.348 [2024-05-15 19:37:25.430328] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:59.348 [2024-05-15 19:37:25.430337] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:59.348 [2024-05-15 19:37:25.430343] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:59.349 [2024-05-15 19:37:25.430352] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:59.349 [2024-05-15 19:37:25.430370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:59.349 19:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:59.349 19:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:59.349 19:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:59.349 19:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:59.349 19:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:59.610 19:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:59.610 19:37:25 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.0gCA0d51z1 00:21:59.610 19:37:25 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.0gCA0d51z1 00:21:59.610 19:37:25 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:59.610 [2024-05-15 19:37:25.703474] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:59.610 19:37:25 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:59.870 19:37:25 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:59.870 [2024-05-15 19:37:26.000184] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:59.870 [2024-05-15 19:37:26.000231] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:59.870 [2024-05-15 19:37:26.000414] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:59.870 19:37:26 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:00.131 malloc0 00:22:00.131 19:37:26 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
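The setup_nvmf_tgt calls above go through rpc.py, which is a thin JSON-RPC client for the target's Unix socket. The sketch below issues the same configuration directly over /var/tmp/spdk.sock; the parameter names are taken from the save_config dump further down in this log, while the one-connection-per-call and read-until-the-reply-parses framing are simplifying assumptions rather than rpc.py's actual implementation.

import json
import socket
from itertools import count

RPC_SOCK = "/var/tmp/spdk.sock"  # the bdevperf side uses -s /var/tmp/bdevperf.sock instead
_ids = count(1)

def rpc(method, params=None):
    # Send one JSON-RPC 2.0 request and read until a complete JSON reply parses.
    req = {"jsonrpc": "2.0", "id": next(_ids), "method": method}
    if params is not None:
        req["params"] = params
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(RPC_SOCK)
        sock.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                raise RuntimeError("socket closed before a full reply arrived")
            buf += chunk
            try:
                return json.loads(buf.decode())
            except ValueError:
                continue  # reply not complete yet

# Same sequence as target/tls.sh setup_nvmf_tgt: TCP transport, subsystem,
# TLS-enabled listener (the -k flag maps to secure_channel), a malloc0
# namespace, and a host entry pinned to the PSK file created above.
rpc("nvmf_create_transport", {"trtype": "TCP"})
rpc("nvmf_create_subsystem", {"nqn": "nqn.2016-06.io.spdk:cnode1",
                              "serial_number": "SPDK00000000000001",
                              "max_namespaces": 10})
rpc("nvmf_subsystem_add_listener", {"nqn": "nqn.2016-06.io.spdk:cnode1",
                                    "secure_channel": True,
                                    "listen_address": {"trtype": "TCP", "adrfam": "IPv4",
                                                       "traddr": "10.0.0.2", "trsvcid": "4420"}})
rpc("bdev_malloc_create", {"name": "malloc0", "num_blocks": 8192, "block_size": 4096})
rpc("nvmf_subsystem_add_ns", {"nqn": "nqn.2016-06.io.spdk:cnode1",
                              "namespace": {"nsid": 1, "bdev_name": "malloc0"}})
rpc("nvmf_subsystem_add_host", {"nqn": "nqn.2016-06.io.spdk:cnode1",
                                "host": "nqn.2016-06.io.spdk:host1",
                                "psk": "/tmp/tmp.0gCA0d51z1"})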
00:22:00.131 19:37:26 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0gCA0d51z1 00:22:00.393 [2024-05-15 19:37:26.427838] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:00.393 19:37:26 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0gCA0d51z1 00:22:00.393 19:37:26 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:00.393 19:37:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:00.393 19:37:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:00.393 19:37:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.0gCA0d51z1' 00:22:00.393 19:37:26 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:00.393 19:37:26 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3638849 00:22:00.393 19:37:26 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:00.393 19:37:26 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:00.393 19:37:26 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3638849 /var/tmp/bdevperf.sock 00:22:00.393 19:37:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3638849 ']' 00:22:00.393 19:37:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:00.393 19:37:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:00.393 19:37:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:00.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:00.393 19:37:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:00.393 19:37:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:00.393 [2024-05-15 19:37:26.473916] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
00:22:00.393 [2024-05-15 19:37:26.473964] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3638849 ] 00:22:00.393 EAL: No free 2048 kB hugepages reported on node 1 00:22:00.393 [2024-05-15 19:37:26.529441] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.654 [2024-05-15 19:37:26.581433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:00.654 19:37:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:00.654 19:37:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:00.654 19:37:26 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0gCA0d51z1 00:22:00.654 [2024-05-15 19:37:26.800630] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:00.654 [2024-05-15 19:37:26.800696] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:00.916 TLSTESTn1 00:22:00.916 19:37:26 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:00.916 Running I/O for 10 seconds... 00:22:10.908 00:22:10.908 Latency(us) 00:22:10.908 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:10.908 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:10.908 Verification LBA range: start 0x0 length 0x2000 00:22:10.908 TLSTESTn1 : 10.06 3433.81 13.41 0.00 0.00 37165.72 6280.53 100051.63 00:22:10.908 =================================================================================================================== 00:22:10.908 Total : 3433.81 13.41 0.00 0.00 37165.72 6280.53 100051.63 00:22:10.908 0 00:22:10.908 19:37:37 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:10.908 19:37:37 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3638849 00:22:10.908 19:37:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3638849 ']' 00:22:10.908 19:37:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3638849 00:22:10.908 19:37:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:11.170 19:37:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:11.170 19:37:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3638849 00:22:11.170 19:37:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:11.170 19:37:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:11.170 19:37:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3638849' 00:22:11.170 killing process with pid 3638849 00:22:11.170 19:37:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3638849 00:22:11.170 Received shutdown signal, test time was about 10.000000 seconds 00:22:11.170 00:22:11.170 Latency(us) 00:22:11.170 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:22:11.170 =================================================================================================================== 00:22:11.170 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:11.170 [2024-05-15 19:37:37.147428] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:11.170 19:37:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3638849 00:22:11.170 19:37:37 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.0gCA0d51z1 00:22:11.170 19:37:37 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0gCA0d51z1 00:22:11.170 19:37:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:11.170 19:37:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0gCA0d51z1 00:22:11.170 19:37:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:11.170 19:37:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:11.170 19:37:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:11.170 19:37:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:11.170 19:37:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0gCA0d51z1 00:22:11.170 19:37:37 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:11.170 19:37:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:11.170 19:37:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:11.170 19:37:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.0gCA0d51z1' 00:22:11.170 19:37:37 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:11.170 19:37:37 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3640968 00:22:11.170 19:37:37 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:11.170 19:37:37 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:11.170 19:37:37 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3640968 /var/tmp/bdevperf.sock 00:22:11.170 19:37:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3640968 ']' 00:22:11.170 19:37:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:11.170 19:37:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:11.170 19:37:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:11.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:11.170 19:37:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:11.170 19:37:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:11.170 [2024-05-15 19:37:37.316109] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
00:22:11.170 [2024-05-15 19:37:37.316163] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3640968 ] 00:22:11.170 EAL: No free 2048 kB hugepages reported on node 1 00:22:11.431 [2024-05-15 19:37:37.371967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.431 [2024-05-15 19:37:37.424421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:12.004 19:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:12.004 19:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:12.004 19:37:38 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0gCA0d51z1 00:22:12.264 [2024-05-15 19:37:38.225119] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:12.264 [2024-05-15 19:37:38.225171] bdev_nvme.c:6105:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:12.264 [2024-05-15 19:37:38.225176] bdev_nvme.c:6214:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.0gCA0d51z1 00:22:12.264 request: 00:22:12.264 { 00:22:12.264 "name": "TLSTEST", 00:22:12.264 "trtype": "tcp", 00:22:12.264 "traddr": "10.0.0.2", 00:22:12.264 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:12.264 "adrfam": "ipv4", 00:22:12.264 "trsvcid": "4420", 00:22:12.264 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.264 "psk": "/tmp/tmp.0gCA0d51z1", 00:22:12.264 "method": "bdev_nvme_attach_controller", 00:22:12.264 "req_id": 1 00:22:12.264 } 00:22:12.264 Got JSON-RPC error response 00:22:12.264 response: 00:22:12.264 { 00:22:12.264 "code": -1, 00:22:12.264 "message": "Operation not permitted" 00:22:12.264 } 00:22:12.264 19:37:38 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3640968 00:22:12.264 19:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3640968 ']' 00:22:12.264 19:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3640968 00:22:12.264 19:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:12.264 19:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:12.264 19:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3640968 00:22:12.264 19:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:12.264 19:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:12.264 19:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3640968' 00:22:12.264 killing process with pid 3640968 00:22:12.264 19:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3640968 00:22:12.264 Received shutdown signal, test time was about 10.000000 seconds 00:22:12.264 00:22:12.264 Latency(us) 00:22:12.264 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:12.264 =================================================================================================================== 00:22:12.264 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:12.264 19:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 
-- # wait 3640968 00:22:12.264 19:37:38 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:12.264 19:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:12.264 19:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:12.264 19:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:12.264 19:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:12.264 19:37:38 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 3638661 00:22:12.264 19:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3638661 ']' 00:22:12.264 19:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3638661 00:22:12.264 19:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:12.264 19:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:12.264 19:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3638661 00:22:12.525 19:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:12.525 19:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:12.525 19:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3638661' 00:22:12.525 killing process with pid 3638661 00:22:12.525 19:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3638661 00:22:12.525 [2024-05-15 19:37:38.475699] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:12.525 [2024-05-15 19:37:38.475740] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:12.525 19:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3638661 00:22:12.525 19:37:38 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:22:12.525 19:37:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:12.525 19:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:12.525 19:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:12.525 19:37:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3641210 00:22:12.525 19:37:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3641210 00:22:12.525 19:37:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:12.525 19:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3641210 ']' 00:22:12.525 19:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:12.525 19:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:12.525 19:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:12.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:12.525 19:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:12.525 19:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:12.525 [2024-05-15 19:37:38.671971] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:22:12.525 [2024-05-15 19:37:38.672021] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:12.525 EAL: No free 2048 kB hugepages reported on node 1 00:22:12.785 [2024-05-15 19:37:38.748488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.785 [2024-05-15 19:37:38.810472] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:12.785 [2024-05-15 19:37:38.810513] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:12.785 [2024-05-15 19:37:38.810521] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:12.785 [2024-05-15 19:37:38.810527] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:12.785 [2024-05-15 19:37:38.810533] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:12.785 [2024-05-15 19:37:38.810552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:12.785 19:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:12.785 19:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:12.785 19:37:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:12.785 19:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:12.785 19:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:12.785 19:37:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:12.785 19:37:38 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.0gCA0d51z1 00:22:12.785 19:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:12.785 19:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.0gCA0d51z1 00:22:12.785 19:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:22:12.785 19:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:12.785 19:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:22:12.785 19:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:12.785 19:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.0gCA0d51z1 00:22:12.785 19:37:38 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.0gCA0d51z1 00:22:12.785 19:37:38 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:13.045 [2024-05-15 19:37:39.083823] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:13.045 19:37:39 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:13.305 19:37:39 nvmf_tcp.nvmf_tls 
-- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:13.305 [2024-05-15 19:37:39.396582] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:13.305 [2024-05-15 19:37:39.396634] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:13.305 [2024-05-15 19:37:39.396815] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:13.305 19:37:39 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:13.567 malloc0 00:22:13.567 19:37:39 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:13.567 19:37:39 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0gCA0d51z1 00:22:13.828 [2024-05-15 19:37:39.824332] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:13.828 [2024-05-15 19:37:39.824356] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:22:13.828 [2024-05-15 19:37:39.824383] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:22:13.828 request: 00:22:13.828 { 00:22:13.828 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:13.828 "host": "nqn.2016-06.io.spdk:host1", 00:22:13.828 "psk": "/tmp/tmp.0gCA0d51z1", 00:22:13.828 "method": "nvmf_subsystem_add_host", 00:22:13.828 "req_id": 1 00:22:13.828 } 00:22:13.828 Got JSON-RPC error response 00:22:13.828 response: 00:22:13.828 { 00:22:13.828 "code": -32603, 00:22:13.828 "message": "Internal error" 00:22:13.828 } 00:22:13.828 19:37:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:13.828 19:37:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:13.828 19:37:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:13.828 19:37:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:13.828 19:37:39 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 3641210 00:22:13.828 19:37:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3641210 ']' 00:22:13.828 19:37:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3641210 00:22:13.828 19:37:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:13.828 19:37:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:13.828 19:37:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3641210 00:22:13.828 19:37:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:13.828 19:37:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:13.828 19:37:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3641210' 00:22:13.828 killing process with pid 3641210 00:22:13.828 19:37:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3641210 00:22:13.828 [2024-05-15 19:37:39.893609] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:13.828 19:37:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3641210 00:22:14.089 19:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.0gCA0d51z1 00:22:14.089 19:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:22:14.089 19:37:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:14.089 19:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:14.089 19:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:14.089 19:37:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3641572 00:22:14.089 19:37:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3641572 00:22:14.089 19:37:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:14.089 19:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3641572 ']' 00:22:14.089 19:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:14.089 19:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:14.089 19:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:14.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:14.089 19:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:14.089 19:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:14.089 [2024-05-15 19:37:40.091391] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:22:14.089 [2024-05-15 19:37:40.091441] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:14.089 EAL: No free 2048 kB hugepages reported on node 1 00:22:14.089 [2024-05-15 19:37:40.165405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.089 [2024-05-15 19:37:40.229972] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:14.089 [2024-05-15 19:37:40.230008] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:14.089 [2024-05-15 19:37:40.230015] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:14.089 [2024-05-15 19:37:40.230022] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:14.089 [2024-05-15 19:37:40.230028] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:14.089 [2024-05-15 19:37:40.230052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:14.349 19:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:14.349 19:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:14.349 19:37:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:14.349 19:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:14.349 19:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:14.349 19:37:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:14.349 19:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.0gCA0d51z1 00:22:14.349 19:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.0gCA0d51z1 00:22:14.349 19:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:14.349 [2024-05-15 19:37:40.531549] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:14.609 19:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:14.609 19:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:14.869 [2024-05-15 19:37:40.832291] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:14.869 [2024-05-15 19:37:40.832343] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:14.869 [2024-05-15 19:37:40.832534] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:14.869 19:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:14.869 malloc0 00:22:14.869 19:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:15.128 19:37:41 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0gCA0d51z1 00:22:15.128 [2024-05-15 19:37:41.264061] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:15.128 19:37:41 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:15.128 19:37:41 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=3641927 00:22:15.128 19:37:41 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:15.128 19:37:41 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 3641927 /var/tmp/bdevperf.sock 00:22:15.128 19:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3641927 ']' 00:22:15.128 19:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:22:15.128 19:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:15.128 19:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:15.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:15.128 19:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:15.128 19:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:15.128 [2024-05-15 19:37:41.306889] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:22:15.128 [2024-05-15 19:37:41.306938] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3641927 ] 00:22:15.398 EAL: No free 2048 kB hugepages reported on node 1 00:22:15.398 [2024-05-15 19:37:41.362756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.398 [2024-05-15 19:37:41.414645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:15.398 19:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:15.398 19:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:15.398 19:37:41 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0gCA0d51z1 00:22:15.671 [2024-05-15 19:37:41.641839] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:15.671 [2024-05-15 19:37:41.641909] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:15.671 TLSTESTn1 00:22:15.671 19:37:41 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:15.932 19:37:42 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:22:15.932 "subsystems": [ 00:22:15.932 { 00:22:15.932 "subsystem": "keyring", 00:22:15.932 "config": [] 00:22:15.932 }, 00:22:15.932 { 00:22:15.932 "subsystem": "iobuf", 00:22:15.932 "config": [ 00:22:15.932 { 00:22:15.932 "method": "iobuf_set_options", 00:22:15.932 "params": { 00:22:15.932 "small_pool_count": 8192, 00:22:15.932 "large_pool_count": 1024, 00:22:15.932 "small_bufsize": 8192, 00:22:15.932 "large_bufsize": 135168 00:22:15.932 } 00:22:15.932 } 00:22:15.932 ] 00:22:15.932 }, 00:22:15.932 { 00:22:15.932 "subsystem": "sock", 00:22:15.932 "config": [ 00:22:15.932 { 00:22:15.932 "method": "sock_impl_set_options", 00:22:15.932 "params": { 00:22:15.932 "impl_name": "posix", 00:22:15.932 "recv_buf_size": 2097152, 00:22:15.932 "send_buf_size": 2097152, 00:22:15.932 "enable_recv_pipe": true, 00:22:15.932 "enable_quickack": false, 00:22:15.932 "enable_placement_id": 0, 00:22:15.932 "enable_zerocopy_send_server": true, 00:22:15.932 "enable_zerocopy_send_client": false, 00:22:15.932 "zerocopy_threshold": 0, 00:22:15.932 "tls_version": 0, 00:22:15.932 "enable_ktls": false 00:22:15.932 } 00:22:15.932 }, 00:22:15.932 { 00:22:15.932 "method": "sock_impl_set_options", 00:22:15.932 "params": { 00:22:15.932 
"impl_name": "ssl", 00:22:15.932 "recv_buf_size": 4096, 00:22:15.932 "send_buf_size": 4096, 00:22:15.932 "enable_recv_pipe": true, 00:22:15.932 "enable_quickack": false, 00:22:15.932 "enable_placement_id": 0, 00:22:15.933 "enable_zerocopy_send_server": true, 00:22:15.933 "enable_zerocopy_send_client": false, 00:22:15.933 "zerocopy_threshold": 0, 00:22:15.933 "tls_version": 0, 00:22:15.933 "enable_ktls": false 00:22:15.933 } 00:22:15.933 } 00:22:15.933 ] 00:22:15.933 }, 00:22:15.933 { 00:22:15.933 "subsystem": "vmd", 00:22:15.933 "config": [] 00:22:15.933 }, 00:22:15.933 { 00:22:15.933 "subsystem": "accel", 00:22:15.933 "config": [ 00:22:15.933 { 00:22:15.933 "method": "accel_set_options", 00:22:15.933 "params": { 00:22:15.933 "small_cache_size": 128, 00:22:15.933 "large_cache_size": 16, 00:22:15.933 "task_count": 2048, 00:22:15.933 "sequence_count": 2048, 00:22:15.933 "buf_count": 2048 00:22:15.933 } 00:22:15.933 } 00:22:15.933 ] 00:22:15.933 }, 00:22:15.933 { 00:22:15.933 "subsystem": "bdev", 00:22:15.933 "config": [ 00:22:15.933 { 00:22:15.933 "method": "bdev_set_options", 00:22:15.933 "params": { 00:22:15.933 "bdev_io_pool_size": 65535, 00:22:15.933 "bdev_io_cache_size": 256, 00:22:15.933 "bdev_auto_examine": true, 00:22:15.933 "iobuf_small_cache_size": 128, 00:22:15.933 "iobuf_large_cache_size": 16 00:22:15.933 } 00:22:15.933 }, 00:22:15.933 { 00:22:15.933 "method": "bdev_raid_set_options", 00:22:15.933 "params": { 00:22:15.933 "process_window_size_kb": 1024 00:22:15.933 } 00:22:15.933 }, 00:22:15.933 { 00:22:15.933 "method": "bdev_iscsi_set_options", 00:22:15.933 "params": { 00:22:15.933 "timeout_sec": 30 00:22:15.933 } 00:22:15.933 }, 00:22:15.933 { 00:22:15.933 "method": "bdev_nvme_set_options", 00:22:15.933 "params": { 00:22:15.933 "action_on_timeout": "none", 00:22:15.933 "timeout_us": 0, 00:22:15.933 "timeout_admin_us": 0, 00:22:15.933 "keep_alive_timeout_ms": 10000, 00:22:15.933 "arbitration_burst": 0, 00:22:15.933 "low_priority_weight": 0, 00:22:15.933 "medium_priority_weight": 0, 00:22:15.933 "high_priority_weight": 0, 00:22:15.933 "nvme_adminq_poll_period_us": 10000, 00:22:15.933 "nvme_ioq_poll_period_us": 0, 00:22:15.933 "io_queue_requests": 0, 00:22:15.933 "delay_cmd_submit": true, 00:22:15.933 "transport_retry_count": 4, 00:22:15.933 "bdev_retry_count": 3, 00:22:15.933 "transport_ack_timeout": 0, 00:22:15.933 "ctrlr_loss_timeout_sec": 0, 00:22:15.933 "reconnect_delay_sec": 0, 00:22:15.933 "fast_io_fail_timeout_sec": 0, 00:22:15.933 "disable_auto_failback": false, 00:22:15.933 "generate_uuids": false, 00:22:15.933 "transport_tos": 0, 00:22:15.933 "nvme_error_stat": false, 00:22:15.933 "rdma_srq_size": 0, 00:22:15.933 "io_path_stat": false, 00:22:15.933 "allow_accel_sequence": false, 00:22:15.933 "rdma_max_cq_size": 0, 00:22:15.933 "rdma_cm_event_timeout_ms": 0, 00:22:15.933 "dhchap_digests": [ 00:22:15.933 "sha256", 00:22:15.933 "sha384", 00:22:15.933 "sha512" 00:22:15.933 ], 00:22:15.933 "dhchap_dhgroups": [ 00:22:15.933 "null", 00:22:15.933 "ffdhe2048", 00:22:15.933 "ffdhe3072", 00:22:15.933 "ffdhe4096", 00:22:15.933 "ffdhe6144", 00:22:15.933 "ffdhe8192" 00:22:15.933 ] 00:22:15.933 } 00:22:15.933 }, 00:22:15.933 { 00:22:15.933 "method": "bdev_nvme_set_hotplug", 00:22:15.933 "params": { 00:22:15.933 "period_us": 100000, 00:22:15.933 "enable": false 00:22:15.933 } 00:22:15.933 }, 00:22:15.933 { 00:22:15.933 "method": "bdev_malloc_create", 00:22:15.933 "params": { 00:22:15.933 "name": "malloc0", 00:22:15.933 "num_blocks": 8192, 00:22:15.933 "block_size": 4096, 00:22:15.933 
"physical_block_size": 4096, 00:22:15.933 "uuid": "0948bf01-aafa-47eb-8658-ee37f41c1778", 00:22:15.933 "optimal_io_boundary": 0 00:22:15.933 } 00:22:15.933 }, 00:22:15.933 { 00:22:15.933 "method": "bdev_wait_for_examine" 00:22:15.933 } 00:22:15.933 ] 00:22:15.933 }, 00:22:15.933 { 00:22:15.933 "subsystem": "nbd", 00:22:15.933 "config": [] 00:22:15.933 }, 00:22:15.933 { 00:22:15.933 "subsystem": "scheduler", 00:22:15.933 "config": [ 00:22:15.933 { 00:22:15.933 "method": "framework_set_scheduler", 00:22:15.933 "params": { 00:22:15.933 "name": "static" 00:22:15.933 } 00:22:15.933 } 00:22:15.933 ] 00:22:15.933 }, 00:22:15.933 { 00:22:15.933 "subsystem": "nvmf", 00:22:15.933 "config": [ 00:22:15.933 { 00:22:15.933 "method": "nvmf_set_config", 00:22:15.933 "params": { 00:22:15.933 "discovery_filter": "match_any", 00:22:15.933 "admin_cmd_passthru": { 00:22:15.933 "identify_ctrlr": false 00:22:15.933 } 00:22:15.933 } 00:22:15.933 }, 00:22:15.933 { 00:22:15.933 "method": "nvmf_set_max_subsystems", 00:22:15.933 "params": { 00:22:15.933 "max_subsystems": 1024 00:22:15.933 } 00:22:15.933 }, 00:22:15.933 { 00:22:15.933 "method": "nvmf_set_crdt", 00:22:15.933 "params": { 00:22:15.933 "crdt1": 0, 00:22:15.933 "crdt2": 0, 00:22:15.933 "crdt3": 0 00:22:15.933 } 00:22:15.933 }, 00:22:15.933 { 00:22:15.933 "method": "nvmf_create_transport", 00:22:15.933 "params": { 00:22:15.933 "trtype": "TCP", 00:22:15.933 "max_queue_depth": 128, 00:22:15.933 "max_io_qpairs_per_ctrlr": 127, 00:22:15.933 "in_capsule_data_size": 4096, 00:22:15.933 "max_io_size": 131072, 00:22:15.933 "io_unit_size": 131072, 00:22:15.933 "max_aq_depth": 128, 00:22:15.933 "num_shared_buffers": 511, 00:22:15.933 "buf_cache_size": 4294967295, 00:22:15.933 "dif_insert_or_strip": false, 00:22:15.933 "zcopy": false, 00:22:15.933 "c2h_success": false, 00:22:15.933 "sock_priority": 0, 00:22:15.933 "abort_timeout_sec": 1, 00:22:15.933 "ack_timeout": 0, 00:22:15.933 "data_wr_pool_size": 0 00:22:15.933 } 00:22:15.933 }, 00:22:15.933 { 00:22:15.933 "method": "nvmf_create_subsystem", 00:22:15.933 "params": { 00:22:15.933 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:15.933 "allow_any_host": false, 00:22:15.933 "serial_number": "SPDK00000000000001", 00:22:15.933 "model_number": "SPDK bdev Controller", 00:22:15.933 "max_namespaces": 10, 00:22:15.933 "min_cntlid": 1, 00:22:15.933 "max_cntlid": 65519, 00:22:15.933 "ana_reporting": false 00:22:15.933 } 00:22:15.933 }, 00:22:15.933 { 00:22:15.933 "method": "nvmf_subsystem_add_host", 00:22:15.933 "params": { 00:22:15.933 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:15.933 "host": "nqn.2016-06.io.spdk:host1", 00:22:15.933 "psk": "/tmp/tmp.0gCA0d51z1" 00:22:15.933 } 00:22:15.933 }, 00:22:15.933 { 00:22:15.933 "method": "nvmf_subsystem_add_ns", 00:22:15.933 "params": { 00:22:15.933 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:15.933 "namespace": { 00:22:15.933 "nsid": 1, 00:22:15.933 "bdev_name": "malloc0", 00:22:15.933 "nguid": "0948BF01AAFA47EB8658EE37F41C1778", 00:22:15.933 "uuid": "0948bf01-aafa-47eb-8658-ee37f41c1778", 00:22:15.933 "no_auto_visible": false 00:22:15.933 } 00:22:15.933 } 00:22:15.933 }, 00:22:15.933 { 00:22:15.933 "method": "nvmf_subsystem_add_listener", 00:22:15.933 "params": { 00:22:15.933 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:15.933 "listen_address": { 00:22:15.933 "trtype": "TCP", 00:22:15.933 "adrfam": "IPv4", 00:22:15.933 "traddr": "10.0.0.2", 00:22:15.933 "trsvcid": "4420" 00:22:15.933 }, 00:22:15.933 "secure_channel": true 00:22:15.933 } 00:22:15.933 } 00:22:15.933 ] 00:22:15.933 } 
00:22:15.933 ] 00:22:15.933 }' 00:22:15.933 19:37:42 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:16.195 19:37:42 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:22:16.195 "subsystems": [ 00:22:16.195 { 00:22:16.195 "subsystem": "keyring", 00:22:16.195 "config": [] 00:22:16.195 }, 00:22:16.195 { 00:22:16.195 "subsystem": "iobuf", 00:22:16.195 "config": [ 00:22:16.195 { 00:22:16.195 "method": "iobuf_set_options", 00:22:16.195 "params": { 00:22:16.195 "small_pool_count": 8192, 00:22:16.195 "large_pool_count": 1024, 00:22:16.195 "small_bufsize": 8192, 00:22:16.195 "large_bufsize": 135168 00:22:16.195 } 00:22:16.195 } 00:22:16.195 ] 00:22:16.195 }, 00:22:16.195 { 00:22:16.195 "subsystem": "sock", 00:22:16.195 "config": [ 00:22:16.195 { 00:22:16.195 "method": "sock_impl_set_options", 00:22:16.195 "params": { 00:22:16.195 "impl_name": "posix", 00:22:16.195 "recv_buf_size": 2097152, 00:22:16.195 "send_buf_size": 2097152, 00:22:16.195 "enable_recv_pipe": true, 00:22:16.195 "enable_quickack": false, 00:22:16.195 "enable_placement_id": 0, 00:22:16.195 "enable_zerocopy_send_server": true, 00:22:16.195 "enable_zerocopy_send_client": false, 00:22:16.195 "zerocopy_threshold": 0, 00:22:16.195 "tls_version": 0, 00:22:16.195 "enable_ktls": false 00:22:16.195 } 00:22:16.195 }, 00:22:16.195 { 00:22:16.195 "method": "sock_impl_set_options", 00:22:16.195 "params": { 00:22:16.195 "impl_name": "ssl", 00:22:16.195 "recv_buf_size": 4096, 00:22:16.195 "send_buf_size": 4096, 00:22:16.195 "enable_recv_pipe": true, 00:22:16.195 "enable_quickack": false, 00:22:16.195 "enable_placement_id": 0, 00:22:16.195 "enable_zerocopy_send_server": true, 00:22:16.195 "enable_zerocopy_send_client": false, 00:22:16.195 "zerocopy_threshold": 0, 00:22:16.195 "tls_version": 0, 00:22:16.195 "enable_ktls": false 00:22:16.195 } 00:22:16.195 } 00:22:16.195 ] 00:22:16.195 }, 00:22:16.195 { 00:22:16.195 "subsystem": "vmd", 00:22:16.195 "config": [] 00:22:16.195 }, 00:22:16.195 { 00:22:16.195 "subsystem": "accel", 00:22:16.195 "config": [ 00:22:16.195 { 00:22:16.195 "method": "accel_set_options", 00:22:16.195 "params": { 00:22:16.195 "small_cache_size": 128, 00:22:16.195 "large_cache_size": 16, 00:22:16.195 "task_count": 2048, 00:22:16.195 "sequence_count": 2048, 00:22:16.195 "buf_count": 2048 00:22:16.195 } 00:22:16.195 } 00:22:16.195 ] 00:22:16.195 }, 00:22:16.196 { 00:22:16.196 "subsystem": "bdev", 00:22:16.196 "config": [ 00:22:16.196 { 00:22:16.196 "method": "bdev_set_options", 00:22:16.196 "params": { 00:22:16.196 "bdev_io_pool_size": 65535, 00:22:16.196 "bdev_io_cache_size": 256, 00:22:16.196 "bdev_auto_examine": true, 00:22:16.196 "iobuf_small_cache_size": 128, 00:22:16.196 "iobuf_large_cache_size": 16 00:22:16.196 } 00:22:16.196 }, 00:22:16.196 { 00:22:16.196 "method": "bdev_raid_set_options", 00:22:16.196 "params": { 00:22:16.196 "process_window_size_kb": 1024 00:22:16.196 } 00:22:16.196 }, 00:22:16.196 { 00:22:16.196 "method": "bdev_iscsi_set_options", 00:22:16.196 "params": { 00:22:16.196 "timeout_sec": 30 00:22:16.196 } 00:22:16.196 }, 00:22:16.196 { 00:22:16.196 "method": "bdev_nvme_set_options", 00:22:16.196 "params": { 00:22:16.196 "action_on_timeout": "none", 00:22:16.196 "timeout_us": 0, 00:22:16.196 "timeout_admin_us": 0, 00:22:16.196 "keep_alive_timeout_ms": 10000, 00:22:16.196 "arbitration_burst": 0, 00:22:16.196 "low_priority_weight": 0, 00:22:16.196 "medium_priority_weight": 0, 00:22:16.196 
"high_priority_weight": 0, 00:22:16.196 "nvme_adminq_poll_period_us": 10000, 00:22:16.196 "nvme_ioq_poll_period_us": 0, 00:22:16.196 "io_queue_requests": 512, 00:22:16.196 "delay_cmd_submit": true, 00:22:16.196 "transport_retry_count": 4, 00:22:16.196 "bdev_retry_count": 3, 00:22:16.196 "transport_ack_timeout": 0, 00:22:16.196 "ctrlr_loss_timeout_sec": 0, 00:22:16.196 "reconnect_delay_sec": 0, 00:22:16.196 "fast_io_fail_timeout_sec": 0, 00:22:16.196 "disable_auto_failback": false, 00:22:16.196 "generate_uuids": false, 00:22:16.196 "transport_tos": 0, 00:22:16.196 "nvme_error_stat": false, 00:22:16.196 "rdma_srq_size": 0, 00:22:16.196 "io_path_stat": false, 00:22:16.196 "allow_accel_sequence": false, 00:22:16.196 "rdma_max_cq_size": 0, 00:22:16.196 "rdma_cm_event_timeout_ms": 0, 00:22:16.196 "dhchap_digests": [ 00:22:16.196 "sha256", 00:22:16.196 "sha384", 00:22:16.196 "sha512" 00:22:16.196 ], 00:22:16.196 "dhchap_dhgroups": [ 00:22:16.196 "null", 00:22:16.196 "ffdhe2048", 00:22:16.196 "ffdhe3072", 00:22:16.196 "ffdhe4096", 00:22:16.196 "ffdhe6144", 00:22:16.196 "ffdhe8192" 00:22:16.196 ] 00:22:16.196 } 00:22:16.196 }, 00:22:16.196 { 00:22:16.196 "method": "bdev_nvme_attach_controller", 00:22:16.196 "params": { 00:22:16.196 "name": "TLSTEST", 00:22:16.196 "trtype": "TCP", 00:22:16.196 "adrfam": "IPv4", 00:22:16.196 "traddr": "10.0.0.2", 00:22:16.196 "trsvcid": "4420", 00:22:16.196 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:16.196 "prchk_reftag": false, 00:22:16.196 "prchk_guard": false, 00:22:16.196 "ctrlr_loss_timeout_sec": 0, 00:22:16.196 "reconnect_delay_sec": 0, 00:22:16.196 "fast_io_fail_timeout_sec": 0, 00:22:16.196 "psk": "/tmp/tmp.0gCA0d51z1", 00:22:16.196 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:16.196 "hdgst": false, 00:22:16.196 "ddgst": false 00:22:16.196 } 00:22:16.196 }, 00:22:16.196 { 00:22:16.196 "method": "bdev_nvme_set_hotplug", 00:22:16.196 "params": { 00:22:16.196 "period_us": 100000, 00:22:16.196 "enable": false 00:22:16.196 } 00:22:16.196 }, 00:22:16.196 { 00:22:16.196 "method": "bdev_wait_for_examine" 00:22:16.196 } 00:22:16.196 ] 00:22:16.196 }, 00:22:16.196 { 00:22:16.196 "subsystem": "nbd", 00:22:16.196 "config": [] 00:22:16.196 } 00:22:16.196 ] 00:22:16.196 }' 00:22:16.196 19:37:42 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 3641927 00:22:16.196 19:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3641927 ']' 00:22:16.196 19:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3641927 00:22:16.196 19:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:16.196 19:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:16.196 19:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3641927 00:22:16.196 19:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:16.196 19:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:16.196 19:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3641927' 00:22:16.196 killing process with pid 3641927 00:22:16.196 19:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3641927 00:22:16.196 Received shutdown signal, test time was about 10.000000 seconds 00:22:16.196 00:22:16.196 Latency(us) 00:22:16.196 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:16.196 
=================================================================================================================== 00:22:16.196 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:16.196 [2024-05-15 19:37:42.362819] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:16.196 19:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3641927 00:22:16.456 19:37:42 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 3641572 00:22:16.456 19:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3641572 ']' 00:22:16.456 19:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3641572 00:22:16.456 19:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:16.456 19:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:16.456 19:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3641572 00:22:16.456 19:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:16.456 19:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:16.456 19:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3641572' 00:22:16.456 killing process with pid 3641572 00:22:16.456 19:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3641572 00:22:16.456 [2024-05-15 19:37:42.529539] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:16.456 [2024-05-15 19:37:42.529575] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:16.456 19:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3641572 00:22:16.716 19:37:42 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:16.716 19:37:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:16.716 19:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:16.716 19:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:16.716 19:37:42 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:22:16.716 "subsystems": [ 00:22:16.716 { 00:22:16.716 "subsystem": "keyring", 00:22:16.716 "config": [] 00:22:16.716 }, 00:22:16.716 { 00:22:16.716 "subsystem": "iobuf", 00:22:16.716 "config": [ 00:22:16.716 { 00:22:16.716 "method": "iobuf_set_options", 00:22:16.716 "params": { 00:22:16.716 "small_pool_count": 8192, 00:22:16.716 "large_pool_count": 1024, 00:22:16.716 "small_bufsize": 8192, 00:22:16.716 "large_bufsize": 135168 00:22:16.716 } 00:22:16.716 } 00:22:16.716 ] 00:22:16.716 }, 00:22:16.716 { 00:22:16.716 "subsystem": "sock", 00:22:16.716 "config": [ 00:22:16.716 { 00:22:16.716 "method": "sock_impl_set_options", 00:22:16.716 "params": { 00:22:16.716 "impl_name": "posix", 00:22:16.716 "recv_buf_size": 2097152, 00:22:16.716 "send_buf_size": 2097152, 00:22:16.716 "enable_recv_pipe": true, 00:22:16.716 "enable_quickack": false, 00:22:16.716 "enable_placement_id": 0, 00:22:16.716 "enable_zerocopy_send_server": true, 00:22:16.716 "enable_zerocopy_send_client": false, 00:22:16.716 "zerocopy_threshold": 0, 00:22:16.716 "tls_version": 0, 00:22:16.716 "enable_ktls": false 00:22:16.716 } 
00:22:16.716 }, 00:22:16.716 { 00:22:16.716 "method": "sock_impl_set_options", 00:22:16.716 "params": { 00:22:16.716 "impl_name": "ssl", 00:22:16.716 "recv_buf_size": 4096, 00:22:16.716 "send_buf_size": 4096, 00:22:16.716 "enable_recv_pipe": true, 00:22:16.716 "enable_quickack": false, 00:22:16.716 "enable_placement_id": 0, 00:22:16.716 "enable_zerocopy_send_server": true, 00:22:16.716 "enable_zerocopy_send_client": false, 00:22:16.716 "zerocopy_threshold": 0, 00:22:16.716 "tls_version": 0, 00:22:16.716 "enable_ktls": false 00:22:16.716 } 00:22:16.716 } 00:22:16.716 ] 00:22:16.716 }, 00:22:16.716 { 00:22:16.716 "subsystem": "vmd", 00:22:16.716 "config": [] 00:22:16.716 }, 00:22:16.716 { 00:22:16.716 "subsystem": "accel", 00:22:16.716 "config": [ 00:22:16.716 { 00:22:16.716 "method": "accel_set_options", 00:22:16.716 "params": { 00:22:16.716 "small_cache_size": 128, 00:22:16.716 "large_cache_size": 16, 00:22:16.716 "task_count": 2048, 00:22:16.716 "sequence_count": 2048, 00:22:16.716 "buf_count": 2048 00:22:16.716 } 00:22:16.717 } 00:22:16.717 ] 00:22:16.717 }, 00:22:16.717 { 00:22:16.717 "subsystem": "bdev", 00:22:16.717 "config": [ 00:22:16.717 { 00:22:16.717 "method": "bdev_set_options", 00:22:16.717 "params": { 00:22:16.717 "bdev_io_pool_size": 65535, 00:22:16.717 "bdev_io_cache_size": 256, 00:22:16.717 "bdev_auto_examine": true, 00:22:16.717 "iobuf_small_cache_size": 128, 00:22:16.717 "iobuf_large_cache_size": 16 00:22:16.717 } 00:22:16.717 }, 00:22:16.717 { 00:22:16.717 "method": "bdev_raid_set_options", 00:22:16.717 "params": { 00:22:16.717 "process_window_size_kb": 1024 00:22:16.717 } 00:22:16.717 }, 00:22:16.717 { 00:22:16.717 "method": "bdev_iscsi_set_options", 00:22:16.717 "params": { 00:22:16.717 "timeout_sec": 30 00:22:16.717 } 00:22:16.717 }, 00:22:16.717 { 00:22:16.717 "method": "bdev_nvme_set_options", 00:22:16.717 "params": { 00:22:16.717 "action_on_timeout": "none", 00:22:16.717 "timeout_us": 0, 00:22:16.717 "timeout_admin_us": 0, 00:22:16.717 "keep_alive_timeout_ms": 10000, 00:22:16.717 "arbitration_burst": 0, 00:22:16.717 "low_priority_weight": 0, 00:22:16.717 "medium_priority_weight": 0, 00:22:16.717 "high_priority_weight": 0, 00:22:16.717 "nvme_adminq_poll_period_us": 10000, 00:22:16.717 "nvme_ioq_poll_period_us": 0, 00:22:16.717 "io_queue_requests": 0, 00:22:16.717 "delay_cmd_submit": true, 00:22:16.717 "transport_retry_count": 4, 00:22:16.717 "bdev_retry_count": 3, 00:22:16.717 "transport_ack_timeout": 0, 00:22:16.717 "ctrlr_loss_timeout_sec": 0, 00:22:16.717 "reconnect_delay_sec": 0, 00:22:16.717 "fast_io_fail_timeout_sec": 0, 00:22:16.717 "disable_auto_failback": false, 00:22:16.717 "generate_uuids": false, 00:22:16.717 "transport_tos": 0, 00:22:16.717 "nvme_error_stat": false, 00:22:16.717 "rdma_srq_size": 0, 00:22:16.717 "io_path_stat": false, 00:22:16.717 "allow_accel_sequence": false, 00:22:16.717 "rdma_max_cq_size": 0, 00:22:16.717 "rdma_cm_event_timeout_ms": 0, 00:22:16.717 "dhchap_digests": [ 00:22:16.717 "sha256", 00:22:16.717 "sha384", 00:22:16.717 "sha512" 00:22:16.717 ], 00:22:16.717 "dhchap_dhgroups": [ 00:22:16.717 "null", 00:22:16.717 "ffdhe2048", 00:22:16.717 "ffdhe3072", 00:22:16.717 "ffdhe4096", 00:22:16.717 "ffdhe6144", 00:22:16.717 "ffdhe8192" 00:22:16.717 ] 00:22:16.717 } 00:22:16.717 }, 00:22:16.717 { 00:22:16.717 "method": "bdev_nvme_set_hotplug", 00:22:16.717 "params": { 00:22:16.717 "period_us": 100000, 00:22:16.717 "enable": false 00:22:16.717 } 00:22:16.717 }, 00:22:16.717 { 00:22:16.717 "method": "bdev_malloc_create", 00:22:16.717 
"params": { 00:22:16.717 "name": "malloc0", 00:22:16.717 "num_blocks": 8192, 00:22:16.717 "block_size": 4096, 00:22:16.717 "physical_block_size": 4096, 00:22:16.717 "uuid": "0948bf01-aafa-47eb-8658-ee37f41c1778", 00:22:16.717 "optimal_io_boundary": 0 00:22:16.717 } 00:22:16.717 }, 00:22:16.717 { 00:22:16.717 "method": "bdev_wait_for_examine" 00:22:16.717 } 00:22:16.717 ] 00:22:16.717 }, 00:22:16.717 { 00:22:16.717 "subsystem": "nbd", 00:22:16.717 "config": [] 00:22:16.717 }, 00:22:16.717 { 00:22:16.717 "subsystem": "scheduler", 00:22:16.717 "config": [ 00:22:16.717 { 00:22:16.717 "method": "framework_set_scheduler", 00:22:16.717 "params": { 00:22:16.717 "name": "static" 00:22:16.717 } 00:22:16.717 } 00:22:16.717 ] 00:22:16.717 }, 00:22:16.717 { 00:22:16.717 "subsystem": "nvmf", 00:22:16.717 "config": [ 00:22:16.717 { 00:22:16.717 "method": "nvmf_set_config", 00:22:16.717 "params": { 00:22:16.717 "discovery_filter": "match_any", 00:22:16.717 "admin_cmd_passthru": { 00:22:16.717 "identify_ctrlr": false 00:22:16.717 } 00:22:16.717 } 00:22:16.717 }, 00:22:16.717 { 00:22:16.717 "method": "nvmf_set_max_subsystems", 00:22:16.717 "params": { 00:22:16.717 "max_subsystems": 1024 00:22:16.717 } 00:22:16.717 }, 00:22:16.717 { 00:22:16.717 "method": "nvmf_set_crdt", 00:22:16.717 "params": { 00:22:16.717 "crdt1": 0, 00:22:16.717 "crdt2": 0, 00:22:16.717 "crdt3": 0 00:22:16.717 } 00:22:16.717 }, 00:22:16.717 { 00:22:16.717 "method": "nvmf_create_transport", 00:22:16.717 "params": { 00:22:16.717 "trtype": "TCP", 00:22:16.717 "max_queue_depth": 128, 00:22:16.717 "max_io_qpairs_per_ctrlr": 127, 00:22:16.717 "in_capsule_data_size": 4096, 00:22:16.717 "max_io_size": 131072, 00:22:16.717 "io_unit_size": 131072, 00:22:16.717 "max_aq_depth": 128, 00:22:16.717 "num_shared_buffers": 511, 00:22:16.717 "buf_cache_size": 4294967295, 00:22:16.717 "dif_insert_or_strip": false, 00:22:16.717 "zcopy": false, 00:22:16.717 "c2h_success": false, 00:22:16.717 "sock_priority": 0, 00:22:16.717 "abort_timeout_sec": 1, 00:22:16.717 "ack_timeout": 0, 00:22:16.717 "data_wr_pool_size": 0 00:22:16.717 } 00:22:16.717 }, 00:22:16.717 { 00:22:16.717 "method": "nvmf_create_subsystem", 00:22:16.717 "params": { 00:22:16.717 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:16.717 "allow_any_host": false, 00:22:16.717 "serial_number": "SPDK00000000000001", 00:22:16.717 "model_number": "SPDK bdev Controller", 00:22:16.717 "max_namespaces": 10, 00:22:16.717 "min_cntlid": 1, 00:22:16.717 "max_cntlid": 65519, 00:22:16.717 "ana_reporting": false 00:22:16.717 } 00:22:16.717 }, 00:22:16.717 { 00:22:16.717 "method": "nvmf_subsystem_add_host", 00:22:16.717 "params": { 00:22:16.717 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:16.717 "host": "nqn.2016-06.io.spdk:host1", 00:22:16.718 "psk": "/tmp/tmp.0gCA0d51z1" 00:22:16.718 } 00:22:16.718 }, 00:22:16.718 { 00:22:16.718 "method": "nvmf_subsystem_add_ns", 00:22:16.718 "params": { 00:22:16.718 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:16.718 "namespace": { 00:22:16.718 "nsid": 1, 00:22:16.718 "bdev_name": "malloc0", 00:22:16.718 "nguid": "0948BF01AAFA47EB8658EE37F41C1778", 00:22:16.718 "uuid": "0948bf01-aafa-47eb-8658-ee37f41c1778", 00:22:16.718 "no_auto_visible": false 00:22:16.718 } 00:22:16.718 } 00:22:16.718 }, 00:22:16.718 { 00:22:16.718 "method": "nvmf_subsystem_add_listener", 00:22:16.718 "params": { 00:22:16.718 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:16.718 "listen_address": { 00:22:16.718 "trtype": "TCP", 00:22:16.718 "adrfam": "IPv4", 00:22:16.718 "traddr": "10.0.0.2", 00:22:16.718 "trsvcid": 
"4420" 00:22:16.718 }, 00:22:16.718 "secure_channel": true 00:22:16.718 } 00:22:16.718 } 00:22:16.718 ] 00:22:16.718 } 00:22:16.718 ] 00:22:16.718 }' 00:22:16.718 19:37:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3642163 00:22:16.718 19:37:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3642163 00:22:16.718 19:37:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:16.718 19:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3642163 ']' 00:22:16.718 19:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.718 19:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:16.718 19:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:16.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:16.718 19:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:16.718 19:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:16.718 [2024-05-15 19:37:42.729564] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:22:16.718 [2024-05-15 19:37:42.729619] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:16.718 EAL: No free 2048 kB hugepages reported on node 1 00:22:16.718 [2024-05-15 19:37:42.801205] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.718 [2024-05-15 19:37:42.867085] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:16.718 [2024-05-15 19:37:42.867125] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:16.718 [2024-05-15 19:37:42.867132] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:16.718 [2024-05-15 19:37:42.867139] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:16.718 [2024-05-15 19:37:42.867145] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:16.718 [2024-05-15 19:37:42.867202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:16.976 [2024-05-15 19:37:43.048375] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:16.976 [2024-05-15 19:37:43.064317] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:16.976 [2024-05-15 19:37:43.080353] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:16.976 [2024-05-15 19:37:43.080397] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:16.976 [2024-05-15 19:37:43.090606] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:17.544 19:37:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:17.544 19:37:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:17.544 19:37:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:17.544 19:37:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:17.544 19:37:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:17.544 19:37:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:17.544 19:37:43 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=3642305 00:22:17.544 19:37:43 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 3642305 /var/tmp/bdevperf.sock 00:22:17.544 19:37:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3642305 ']' 00:22:17.544 19:37:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:17.544 19:37:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:17.544 19:37:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:17.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
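With the restored configuration loaded, the target is again listening on 10.0.0.2 port 4420 with the TLS-enabled (secure_channel) listener. The commands that follow bring up the initiator side: bdevperf is started idle (-z) with its own generated JSON config, and the verify workload is then driven over its RPC socket. Condensed from the trace below, with paths shortened to be relative to the SPDK checkout (the /dev/fd/63 seen in the trace is the same echoed-config hand-off):

    # Start bdevperf waiting for RPCs, reading the client config from a fd,
    # then kick off the 10-second verify workload via bdevperf.py.
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf")
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests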
00:22:17.544 19:37:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:17.544 19:37:43 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:17.544 19:37:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:17.544 19:37:43 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:22:17.544 "subsystems": [ 00:22:17.544 { 00:22:17.544 "subsystem": "keyring", 00:22:17.544 "config": [] 00:22:17.544 }, 00:22:17.544 { 00:22:17.544 "subsystem": "iobuf", 00:22:17.544 "config": [ 00:22:17.544 { 00:22:17.544 "method": "iobuf_set_options", 00:22:17.544 "params": { 00:22:17.544 "small_pool_count": 8192, 00:22:17.544 "large_pool_count": 1024, 00:22:17.544 "small_bufsize": 8192, 00:22:17.544 "large_bufsize": 135168 00:22:17.544 } 00:22:17.544 } 00:22:17.544 ] 00:22:17.544 }, 00:22:17.544 { 00:22:17.544 "subsystem": "sock", 00:22:17.544 "config": [ 00:22:17.544 { 00:22:17.544 "method": "sock_impl_set_options", 00:22:17.544 "params": { 00:22:17.544 "impl_name": "posix", 00:22:17.544 "recv_buf_size": 2097152, 00:22:17.544 "send_buf_size": 2097152, 00:22:17.544 "enable_recv_pipe": true, 00:22:17.544 "enable_quickack": false, 00:22:17.544 "enable_placement_id": 0, 00:22:17.544 "enable_zerocopy_send_server": true, 00:22:17.544 "enable_zerocopy_send_client": false, 00:22:17.544 "zerocopy_threshold": 0, 00:22:17.544 "tls_version": 0, 00:22:17.544 "enable_ktls": false 00:22:17.544 } 00:22:17.544 }, 00:22:17.544 { 00:22:17.544 "method": "sock_impl_set_options", 00:22:17.544 "params": { 00:22:17.544 "impl_name": "ssl", 00:22:17.544 "recv_buf_size": 4096, 00:22:17.544 "send_buf_size": 4096, 00:22:17.544 "enable_recv_pipe": true, 00:22:17.544 "enable_quickack": false, 00:22:17.544 "enable_placement_id": 0, 00:22:17.544 "enable_zerocopy_send_server": true, 00:22:17.544 "enable_zerocopy_send_client": false, 00:22:17.544 "zerocopy_threshold": 0, 00:22:17.544 "tls_version": 0, 00:22:17.544 "enable_ktls": false 00:22:17.544 } 00:22:17.544 } 00:22:17.544 ] 00:22:17.544 }, 00:22:17.544 { 00:22:17.544 "subsystem": "vmd", 00:22:17.544 "config": [] 00:22:17.544 }, 00:22:17.544 { 00:22:17.544 "subsystem": "accel", 00:22:17.544 "config": [ 00:22:17.544 { 00:22:17.544 "method": "accel_set_options", 00:22:17.544 "params": { 00:22:17.544 "small_cache_size": 128, 00:22:17.544 "large_cache_size": 16, 00:22:17.544 "task_count": 2048, 00:22:17.544 "sequence_count": 2048, 00:22:17.544 "buf_count": 2048 00:22:17.544 } 00:22:17.544 } 00:22:17.544 ] 00:22:17.544 }, 00:22:17.544 { 00:22:17.544 "subsystem": "bdev", 00:22:17.544 "config": [ 00:22:17.544 { 00:22:17.544 "method": "bdev_set_options", 00:22:17.544 "params": { 00:22:17.544 "bdev_io_pool_size": 65535, 00:22:17.544 "bdev_io_cache_size": 256, 00:22:17.544 "bdev_auto_examine": true, 00:22:17.544 "iobuf_small_cache_size": 128, 00:22:17.544 "iobuf_large_cache_size": 16 00:22:17.544 } 00:22:17.544 }, 00:22:17.544 { 00:22:17.544 "method": "bdev_raid_set_options", 00:22:17.544 "params": { 00:22:17.544 "process_window_size_kb": 1024 00:22:17.544 } 00:22:17.544 }, 00:22:17.544 { 00:22:17.544 "method": "bdev_iscsi_set_options", 00:22:17.544 "params": { 00:22:17.544 "timeout_sec": 30 00:22:17.544 } 00:22:17.544 }, 00:22:17.544 { 00:22:17.544 "method": "bdev_nvme_set_options", 00:22:17.544 "params": { 00:22:17.544 "action_on_timeout": "none", 00:22:17.544 "timeout_us": 0, 00:22:17.544 
"timeout_admin_us": 0, 00:22:17.544 "keep_alive_timeout_ms": 10000, 00:22:17.544 "arbitration_burst": 0, 00:22:17.544 "low_priority_weight": 0, 00:22:17.544 "medium_priority_weight": 0, 00:22:17.544 "high_priority_weight": 0, 00:22:17.544 "nvme_adminq_poll_period_us": 10000, 00:22:17.544 "nvme_ioq_poll_period_us": 0, 00:22:17.544 "io_queue_requests": 512, 00:22:17.544 "delay_cmd_submit": true, 00:22:17.544 "transport_retry_count": 4, 00:22:17.544 "bdev_retry_count": 3, 00:22:17.544 "transport_ack_timeout": 0, 00:22:17.544 "ctrlr_loss_timeout_sec": 0, 00:22:17.544 "reconnect_delay_sec": 0, 00:22:17.544 "fast_io_fail_timeout_sec": 0, 00:22:17.544 "disable_auto_failback": false, 00:22:17.544 "generate_uuids": false, 00:22:17.544 "transport_tos": 0, 00:22:17.544 "nvme_error_stat": false, 00:22:17.544 "rdma_srq_size": 0, 00:22:17.544 "io_path_stat": false, 00:22:17.544 "allow_accel_sequence": false, 00:22:17.544 "rdma_max_cq_size": 0, 00:22:17.544 "rdma_cm_event_timeout_ms": 0, 00:22:17.544 "dhchap_digests": [ 00:22:17.544 "sha256", 00:22:17.544 "sha384", 00:22:17.544 "sha512" 00:22:17.544 ], 00:22:17.544 "dhchap_dhgroups": [ 00:22:17.544 "null", 00:22:17.544 "ffdhe2048", 00:22:17.544 "ffdhe3072", 00:22:17.544 "ffdhe4096", 00:22:17.544 "ffdhe6144", 00:22:17.544 "ffdhe8192" 00:22:17.544 ] 00:22:17.544 } 00:22:17.544 }, 00:22:17.544 { 00:22:17.544 "method": "bdev_nvme_attach_controller", 00:22:17.544 "params": { 00:22:17.544 "name": "TLSTEST", 00:22:17.544 "trtype": "TCP", 00:22:17.544 "adrfam": "IPv4", 00:22:17.544 "traddr": "10.0.0.2", 00:22:17.544 "trsvcid": "4420", 00:22:17.544 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:17.544 "prchk_reftag": false, 00:22:17.544 "prchk_guard": false, 00:22:17.544 "ctrlr_loss_timeout_sec": 0, 00:22:17.544 "reconnect_delay_sec": 0, 00:22:17.544 "fast_io_fail_timeout_sec": 0, 00:22:17.544 "psk": "/tmp/tmp.0gCA0d51z1", 00:22:17.544 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:17.544 "hdgst": false, 00:22:17.544 "ddgst": false 00:22:17.544 } 00:22:17.544 }, 00:22:17.544 { 00:22:17.544 "method": "bdev_nvme_set_hotplug", 00:22:17.544 "params": { 00:22:17.544 "period_us": 100000, 00:22:17.544 "enable": false 00:22:17.544 } 00:22:17.544 }, 00:22:17.544 { 00:22:17.544 "method": "bdev_wait_for_examine" 00:22:17.544 } 00:22:17.544 ] 00:22:17.544 }, 00:22:17.544 { 00:22:17.544 "subsystem": "nbd", 00:22:17.544 "config": [] 00:22:17.544 } 00:22:17.544 ] 00:22:17.544 }' 00:22:17.544 [2024-05-15 19:37:43.674405] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
00:22:17.544 [2024-05-15 19:37:43.674461] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3642305 ] 00:22:17.544 EAL: No free 2048 kB hugepages reported on node 1 00:22:17.805 [2024-05-15 19:37:43.728644] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.805 [2024-05-15 19:37:43.780395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:17.805 [2024-05-15 19:37:43.896458] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:17.805 [2024-05-15 19:37:43.896520] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:18.377 19:37:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:18.377 19:37:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:18.377 19:37:44 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:18.640 Running I/O for 10 seconds... 00:22:28.636 00:22:28.636 Latency(us) 00:22:28.636 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:28.636 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:28.636 Verification LBA range: start 0x0 length 0x2000 00:22:28.636 TLSTESTn1 : 10.06 3400.08 13.28 0.00 0.00 37536.84 4724.05 53957.97 00:22:28.636 =================================================================================================================== 00:22:28.636 Total : 3400.08 13.28 0.00 0.00 37536.84 4724.05 53957.97 00:22:28.636 0 00:22:28.636 19:37:54 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:28.636 19:37:54 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 3642305 00:22:28.636 19:37:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3642305 ']' 00:22:28.636 19:37:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3642305 00:22:28.636 19:37:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:28.636 19:37:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:28.636 19:37:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3642305 00:22:28.636 19:37:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:28.636 19:37:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:28.636 19:37:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3642305' 00:22:28.636 killing process with pid 3642305 00:22:28.636 19:37:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3642305 00:22:28.636 Received shutdown signal, test time was about 10.000000 seconds 00:22:28.636 00:22:28.636 Latency(us) 00:22:28.636 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:28.636 =================================================================================================================== 00:22:28.636 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:28.636 [2024-05-15 19:37:54.793361] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for 
removal in v24.09 hit 1 times 00:22:28.636 19:37:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3642305 00:22:28.897 19:37:54 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 3642163 00:22:28.897 19:37:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3642163 ']' 00:22:28.897 19:37:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3642163 00:22:28.897 19:37:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:28.897 19:37:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:28.897 19:37:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3642163 00:22:28.897 19:37:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:28.897 19:37:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:28.897 19:37:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3642163' 00:22:28.897 killing process with pid 3642163 00:22:28.897 19:37:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3642163 00:22:28.897 [2024-05-15 19:37:54.961849] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:28.897 [2024-05-15 19:37:54.961886] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:28.897 19:37:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3642163 00:22:29.158 19:37:55 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:22:29.158 19:37:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:29.158 19:37:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:29.158 19:37:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.158 19:37:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3644643 00:22:29.158 19:37:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3644643 00:22:29.158 19:37:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:29.158 19:37:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3644643 ']' 00:22:29.158 19:37:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:29.158 19:37:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:29.158 19:37:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:29.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:29.158 19:37:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:29.158 19:37:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.158 [2024-05-15 19:37:55.156602] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
00:22:29.158 [2024-05-15 19:37:55.156656] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:29.158 EAL: No free 2048 kB hugepages reported on node 1 00:22:29.158 [2024-05-15 19:37:55.244346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.158 [2024-05-15 19:37:55.335636] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:29.158 [2024-05-15 19:37:55.335692] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:29.158 [2024-05-15 19:37:55.335701] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:29.158 [2024-05-15 19:37:55.335708] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:29.158 [2024-05-15 19:37:55.335714] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:29.158 [2024-05-15 19:37:55.335738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.102 19:37:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:30.102 19:37:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:30.102 19:37:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:30.102 19:37:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:30.102 19:37:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:30.102 19:37:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:30.102 19:37:56 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.0gCA0d51z1 00:22:30.102 19:37:56 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.0gCA0d51z1 00:22:30.102 19:37:56 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:30.102 [2024-05-15 19:37:56.280140] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:30.363 19:37:56 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:30.363 19:37:56 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:30.625 [2024-05-15 19:37:56.697173] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:30.625 [2024-05-15 19:37:56.697260] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:30.625 [2024-05-15 19:37:56.697534] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:30.625 19:37:56 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:30.887 malloc0 00:22:30.887 19:37:56 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
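Stripped of the xtrace noise, the setup_nvmf_tgt sequence above builds the TLS-capable target with a handful of rpc.py calls; the PSK host registration is the very next command in the trace below. Condensed, with values copied from the log:

    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k -> secure_channel (TLS) listener, per the saved config above
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0gCA0d51z1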
00:22:31.149 19:37:57 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0gCA0d51z1 00:22:31.412 [2024-05-15 19:37:57.353273] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:31.412 19:37:57 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=3645015 00:22:31.412 19:37:57 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:31.412 19:37:57 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:31.412 19:37:57 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 3645015 /var/tmp/bdevperf.sock 00:22:31.412 19:37:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3645015 ']' 00:22:31.412 19:37:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:31.412 19:37:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:31.412 19:37:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:31.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:31.412 19:37:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:31.412 19:37:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:31.412 [2024-05-15 19:37:57.431537] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:22:31.412 [2024-05-15 19:37:57.431607] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3645015 ] 00:22:31.412 EAL: No free 2048 kB hugepages reported on node 1 00:22:31.412 [2024-05-15 19:37:57.501385] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.412 [2024-05-15 19:37:57.574354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:32.355 19:37:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:32.355 19:37:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:32.355 19:37:58 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0gCA0d51z1 00:22:32.355 19:37:58 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:32.616 [2024-05-15 19:37:58.677900] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:32.616 nvme0n1 00:22:32.616 19:37:58 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:32.876 Running I/O for 1 seconds... 
00:22:33.833 00:22:33.833 Latency(us) 00:22:33.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:33.833 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:33.833 Verification LBA range: start 0x0 length 0x2000 00:22:33.833 nvme0n1 : 1.08 867.16 3.39 0.00 0.00 143817.42 8738.13 186996.05 00:22:33.833 =================================================================================================================== 00:22:33.833 Total : 867.16 3.39 0.00 0.00 143817.42 8738.13 186996.05 00:22:33.833 0 00:22:33.833 19:37:59 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 3645015 00:22:33.833 19:37:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3645015 ']' 00:22:33.833 19:37:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3645015 00:22:33.833 19:37:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:33.833 19:37:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:33.833 19:38:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3645015 00:22:34.138 19:38:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:34.138 19:38:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:34.138 19:38:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3645015' 00:22:34.138 killing process with pid 3645015 00:22:34.138 19:38:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3645015 00:22:34.138 Received shutdown signal, test time was about 1.000000 seconds 00:22:34.138 00:22:34.138 Latency(us) 00:22:34.138 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:34.138 =================================================================================================================== 00:22:34.138 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:34.138 19:38:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3645015 00:22:34.138 19:38:00 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 3644643 00:22:34.138 19:38:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3644643 ']' 00:22:34.138 19:38:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3644643 00:22:34.138 19:38:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:34.138 19:38:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:34.138 19:38:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3644643 00:22:34.138 19:38:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:34.138 19:38:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:34.138 19:38:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3644643' 00:22:34.138 killing process with pid 3644643 00:22:34.138 19:38:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3644643 00:22:34.138 [2024-05-15 19:38:00.243579] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:34.138 [2024-05-15 19:38:00.243623] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:34.138 19:38:00 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@970 -- # wait 3644643 00:22:34.398 19:38:00 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:22:34.398 19:38:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:34.398 19:38:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:34.398 19:38:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:34.398 19:38:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3645698 00:22:34.398 19:38:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3645698 00:22:34.398 19:38:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:34.398 19:38:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3645698 ']' 00:22:34.398 19:38:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.398 19:38:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:34.398 19:38:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:34.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:34.398 19:38:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:34.398 19:38:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:34.398 [2024-05-15 19:38:00.439637] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:22:34.398 [2024-05-15 19:38:00.439687] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:34.399 EAL: No free 2048 kB hugepages reported on node 1 00:22:34.399 [2024-05-15 19:38:00.514824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.399 [2024-05-15 19:38:00.577619] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:34.399 [2024-05-15 19:38:00.577658] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:34.399 [2024-05-15 19:38:00.577666] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:34.399 [2024-05-15 19:38:00.577672] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:34.399 [2024-05-15 19:38:00.577678] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
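The target started here (pid 3645698) is configured inline through rpc_cmd, and this run switches from the deprecated on-disk PSK path to a keyring entry: the key file is registered once as key0 and both sides then reference it by name, which is why the saved configs further down show "psk": "key0" instead of a path. Condensed from the bdevperf-side trace below (the target side registers the same key name):

    # Register the PSK file under a keyring name, then attach using --psk key0.
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0gCA0d51z1
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1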
00:22:34.399 [2024-05-15 19:38:00.577698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:34.658 19:38:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:34.658 19:38:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:34.658 19:38:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:34.658 19:38:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:34.658 19:38:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:34.658 19:38:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:34.658 19:38:00 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:22:34.658 19:38:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.658 19:38:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:34.658 [2024-05-15 19:38:00.710980] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:34.658 malloc0 00:22:34.658 [2024-05-15 19:38:00.737667] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:34.658 [2024-05-15 19:38:00.737716] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:34.658 [2024-05-15 19:38:00.737897] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:34.658 19:38:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.658 19:38:00 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=3645719 00:22:34.658 19:38:00 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 3645719 /var/tmp/bdevperf.sock 00:22:34.658 19:38:00 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:34.658 19:38:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3645719 ']' 00:22:34.658 19:38:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:34.658 19:38:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:34.658 19:38:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:34.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:34.658 19:38:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:34.658 19:38:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:34.658 [2024-05-15 19:38:00.813084] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
00:22:34.658 [2024-05-15 19:38:00.813132] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3645719 ] 00:22:34.658 EAL: No free 2048 kB hugepages reported on node 1 00:22:34.918 [2024-05-15 19:38:00.876578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.918 [2024-05-15 19:38:00.940531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:34.918 19:38:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:34.918 19:38:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:34.918 19:38:01 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0gCA0d51z1 00:22:35.178 19:38:01 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:35.438 [2024-05-15 19:38:01.425909] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:35.438 nvme0n1 00:22:35.438 19:38:01 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:35.700 Running I/O for 1 seconds... 00:22:36.641 00:22:36.641 Latency(us) 00:22:36.641 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:36.641 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:36.641 Verification LBA range: start 0x0 length 0x2000 00:22:36.641 nvme0n1 : 1.05 2861.69 11.18 0.00 0.00 43645.99 8465.07 80827.73 00:22:36.641 =================================================================================================================== 00:22:36.641 Total : 2861.69 11.18 0.00 0.00 43645.99 8465.07 80827.73 00:22:36.641 0 00:22:36.641 19:38:02 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:22:36.642 19:38:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.642 19:38:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:36.902 19:38:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.902 19:38:02 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:22:36.902 "subsystems": [ 00:22:36.902 { 00:22:36.902 "subsystem": "keyring", 00:22:36.902 "config": [ 00:22:36.902 { 00:22:36.902 "method": "keyring_file_add_key", 00:22:36.902 "params": { 00:22:36.902 "name": "key0", 00:22:36.902 "path": "/tmp/tmp.0gCA0d51z1" 00:22:36.902 } 00:22:36.902 } 00:22:36.902 ] 00:22:36.902 }, 00:22:36.902 { 00:22:36.902 "subsystem": "iobuf", 00:22:36.902 "config": [ 00:22:36.902 { 00:22:36.902 "method": "iobuf_set_options", 00:22:36.902 "params": { 00:22:36.902 "small_pool_count": 8192, 00:22:36.902 "large_pool_count": 1024, 00:22:36.902 "small_bufsize": 8192, 00:22:36.902 "large_bufsize": 135168 00:22:36.902 } 00:22:36.902 } 00:22:36.902 ] 00:22:36.902 }, 00:22:36.902 { 00:22:36.902 "subsystem": "sock", 00:22:36.902 "config": [ 00:22:36.902 { 00:22:36.902 "method": "sock_impl_set_options", 00:22:36.902 "params": { 00:22:36.902 "impl_name": "posix", 00:22:36.902 "recv_buf_size": 2097152, 
00:22:36.902 "send_buf_size": 2097152, 00:22:36.902 "enable_recv_pipe": true, 00:22:36.902 "enable_quickack": false, 00:22:36.902 "enable_placement_id": 0, 00:22:36.902 "enable_zerocopy_send_server": true, 00:22:36.902 "enable_zerocopy_send_client": false, 00:22:36.902 "zerocopy_threshold": 0, 00:22:36.902 "tls_version": 0, 00:22:36.902 "enable_ktls": false 00:22:36.902 } 00:22:36.902 }, 00:22:36.902 { 00:22:36.902 "method": "sock_impl_set_options", 00:22:36.902 "params": { 00:22:36.902 "impl_name": "ssl", 00:22:36.902 "recv_buf_size": 4096, 00:22:36.902 "send_buf_size": 4096, 00:22:36.902 "enable_recv_pipe": true, 00:22:36.902 "enable_quickack": false, 00:22:36.902 "enable_placement_id": 0, 00:22:36.902 "enable_zerocopy_send_server": true, 00:22:36.902 "enable_zerocopy_send_client": false, 00:22:36.902 "zerocopy_threshold": 0, 00:22:36.902 "tls_version": 0, 00:22:36.902 "enable_ktls": false 00:22:36.902 } 00:22:36.902 } 00:22:36.902 ] 00:22:36.902 }, 00:22:36.902 { 00:22:36.902 "subsystem": "vmd", 00:22:36.902 "config": [] 00:22:36.902 }, 00:22:36.902 { 00:22:36.902 "subsystem": "accel", 00:22:36.902 "config": [ 00:22:36.902 { 00:22:36.902 "method": "accel_set_options", 00:22:36.902 "params": { 00:22:36.902 "small_cache_size": 128, 00:22:36.902 "large_cache_size": 16, 00:22:36.902 "task_count": 2048, 00:22:36.902 "sequence_count": 2048, 00:22:36.902 "buf_count": 2048 00:22:36.902 } 00:22:36.902 } 00:22:36.902 ] 00:22:36.902 }, 00:22:36.902 { 00:22:36.902 "subsystem": "bdev", 00:22:36.902 "config": [ 00:22:36.902 { 00:22:36.902 "method": "bdev_set_options", 00:22:36.902 "params": { 00:22:36.902 "bdev_io_pool_size": 65535, 00:22:36.902 "bdev_io_cache_size": 256, 00:22:36.902 "bdev_auto_examine": true, 00:22:36.902 "iobuf_small_cache_size": 128, 00:22:36.902 "iobuf_large_cache_size": 16 00:22:36.902 } 00:22:36.902 }, 00:22:36.902 { 00:22:36.902 "method": "bdev_raid_set_options", 00:22:36.902 "params": { 00:22:36.902 "process_window_size_kb": 1024 00:22:36.902 } 00:22:36.902 }, 00:22:36.902 { 00:22:36.902 "method": "bdev_iscsi_set_options", 00:22:36.902 "params": { 00:22:36.902 "timeout_sec": 30 00:22:36.902 } 00:22:36.902 }, 00:22:36.902 { 00:22:36.902 "method": "bdev_nvme_set_options", 00:22:36.902 "params": { 00:22:36.902 "action_on_timeout": "none", 00:22:36.902 "timeout_us": 0, 00:22:36.902 "timeout_admin_us": 0, 00:22:36.902 "keep_alive_timeout_ms": 10000, 00:22:36.902 "arbitration_burst": 0, 00:22:36.902 "low_priority_weight": 0, 00:22:36.902 "medium_priority_weight": 0, 00:22:36.902 "high_priority_weight": 0, 00:22:36.902 "nvme_adminq_poll_period_us": 10000, 00:22:36.902 "nvme_ioq_poll_period_us": 0, 00:22:36.902 "io_queue_requests": 0, 00:22:36.902 "delay_cmd_submit": true, 00:22:36.902 "transport_retry_count": 4, 00:22:36.902 "bdev_retry_count": 3, 00:22:36.902 "transport_ack_timeout": 0, 00:22:36.902 "ctrlr_loss_timeout_sec": 0, 00:22:36.902 "reconnect_delay_sec": 0, 00:22:36.902 "fast_io_fail_timeout_sec": 0, 00:22:36.902 "disable_auto_failback": false, 00:22:36.902 "generate_uuids": false, 00:22:36.902 "transport_tos": 0, 00:22:36.902 "nvme_error_stat": false, 00:22:36.902 "rdma_srq_size": 0, 00:22:36.902 "io_path_stat": false, 00:22:36.902 "allow_accel_sequence": false, 00:22:36.902 "rdma_max_cq_size": 0, 00:22:36.902 "rdma_cm_event_timeout_ms": 0, 00:22:36.902 "dhchap_digests": [ 00:22:36.902 "sha256", 00:22:36.902 "sha384", 00:22:36.902 "sha512" 00:22:36.902 ], 00:22:36.902 "dhchap_dhgroups": [ 00:22:36.902 "null", 00:22:36.902 "ffdhe2048", 00:22:36.902 "ffdhe3072", 
00:22:36.902 "ffdhe4096", 00:22:36.902 "ffdhe6144", 00:22:36.902 "ffdhe8192" 00:22:36.902 ] 00:22:36.902 } 00:22:36.902 }, 00:22:36.902 { 00:22:36.902 "method": "bdev_nvme_set_hotplug", 00:22:36.902 "params": { 00:22:36.902 "period_us": 100000, 00:22:36.902 "enable": false 00:22:36.902 } 00:22:36.902 }, 00:22:36.902 { 00:22:36.902 "method": "bdev_malloc_create", 00:22:36.902 "params": { 00:22:36.902 "name": "malloc0", 00:22:36.902 "num_blocks": 8192, 00:22:36.902 "block_size": 4096, 00:22:36.902 "physical_block_size": 4096, 00:22:36.902 "uuid": "5d62884e-01e6-4ea6-810f-5855c935199e", 00:22:36.902 "optimal_io_boundary": 0 00:22:36.902 } 00:22:36.902 }, 00:22:36.902 { 00:22:36.902 "method": "bdev_wait_for_examine" 00:22:36.902 } 00:22:36.902 ] 00:22:36.902 }, 00:22:36.902 { 00:22:36.902 "subsystem": "nbd", 00:22:36.902 "config": [] 00:22:36.902 }, 00:22:36.902 { 00:22:36.902 "subsystem": "scheduler", 00:22:36.902 "config": [ 00:22:36.902 { 00:22:36.902 "method": "framework_set_scheduler", 00:22:36.902 "params": { 00:22:36.902 "name": "static" 00:22:36.902 } 00:22:36.902 } 00:22:36.902 ] 00:22:36.902 }, 00:22:36.902 { 00:22:36.902 "subsystem": "nvmf", 00:22:36.902 "config": [ 00:22:36.902 { 00:22:36.902 "method": "nvmf_set_config", 00:22:36.902 "params": { 00:22:36.902 "discovery_filter": "match_any", 00:22:36.902 "admin_cmd_passthru": { 00:22:36.902 "identify_ctrlr": false 00:22:36.902 } 00:22:36.902 } 00:22:36.902 }, 00:22:36.902 { 00:22:36.902 "method": "nvmf_set_max_subsystems", 00:22:36.902 "params": { 00:22:36.902 "max_subsystems": 1024 00:22:36.902 } 00:22:36.902 }, 00:22:36.902 { 00:22:36.902 "method": "nvmf_set_crdt", 00:22:36.902 "params": { 00:22:36.902 "crdt1": 0, 00:22:36.902 "crdt2": 0, 00:22:36.902 "crdt3": 0 00:22:36.902 } 00:22:36.902 }, 00:22:36.902 { 00:22:36.902 "method": "nvmf_create_transport", 00:22:36.902 "params": { 00:22:36.902 "trtype": "TCP", 00:22:36.902 "max_queue_depth": 128, 00:22:36.902 "max_io_qpairs_per_ctrlr": 127, 00:22:36.902 "in_capsule_data_size": 4096, 00:22:36.902 "max_io_size": 131072, 00:22:36.902 "io_unit_size": 131072, 00:22:36.902 "max_aq_depth": 128, 00:22:36.902 "num_shared_buffers": 511, 00:22:36.902 "buf_cache_size": 4294967295, 00:22:36.902 "dif_insert_or_strip": false, 00:22:36.902 "zcopy": false, 00:22:36.902 "c2h_success": false, 00:22:36.902 "sock_priority": 0, 00:22:36.902 "abort_timeout_sec": 1, 00:22:36.902 "ack_timeout": 0, 00:22:36.902 "data_wr_pool_size": 0 00:22:36.902 } 00:22:36.902 }, 00:22:36.902 { 00:22:36.902 "method": "nvmf_create_subsystem", 00:22:36.902 "params": { 00:22:36.902 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:36.902 "allow_any_host": false, 00:22:36.902 "serial_number": "00000000000000000000", 00:22:36.902 "model_number": "SPDK bdev Controller", 00:22:36.902 "max_namespaces": 32, 00:22:36.902 "min_cntlid": 1, 00:22:36.902 "max_cntlid": 65519, 00:22:36.902 "ana_reporting": false 00:22:36.902 } 00:22:36.902 }, 00:22:36.902 { 00:22:36.902 "method": "nvmf_subsystem_add_host", 00:22:36.902 "params": { 00:22:36.902 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:36.902 "host": "nqn.2016-06.io.spdk:host1", 00:22:36.902 "psk": "key0" 00:22:36.902 } 00:22:36.902 }, 00:22:36.902 { 00:22:36.902 "method": "nvmf_subsystem_add_ns", 00:22:36.902 "params": { 00:22:36.902 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:36.902 "namespace": { 00:22:36.902 "nsid": 1, 00:22:36.902 "bdev_name": "malloc0", 00:22:36.902 "nguid": "5D62884E01E64EA6810F5855C935199E", 00:22:36.902 "uuid": "5d62884e-01e6-4ea6-810f-5855c935199e", 00:22:36.902 
"no_auto_visible": false 00:22:36.902 } 00:22:36.902 } 00:22:36.902 }, 00:22:36.902 { 00:22:36.902 "method": "nvmf_subsystem_add_listener", 00:22:36.902 "params": { 00:22:36.902 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:36.902 "listen_address": { 00:22:36.902 "trtype": "TCP", 00:22:36.902 "adrfam": "IPv4", 00:22:36.902 "traddr": "10.0.0.2", 00:22:36.902 "trsvcid": "4420" 00:22:36.902 }, 00:22:36.902 "secure_channel": true 00:22:36.902 } 00:22:36.902 } 00:22:36.902 ] 00:22:36.902 } 00:22:36.902 ] 00:22:36.902 }' 00:22:36.902 19:38:02 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:37.163 19:38:03 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:22:37.163 "subsystems": [ 00:22:37.163 { 00:22:37.163 "subsystem": "keyring", 00:22:37.163 "config": [ 00:22:37.163 { 00:22:37.163 "method": "keyring_file_add_key", 00:22:37.163 "params": { 00:22:37.163 "name": "key0", 00:22:37.163 "path": "/tmp/tmp.0gCA0d51z1" 00:22:37.163 } 00:22:37.163 } 00:22:37.163 ] 00:22:37.163 }, 00:22:37.163 { 00:22:37.163 "subsystem": "iobuf", 00:22:37.163 "config": [ 00:22:37.163 { 00:22:37.163 "method": "iobuf_set_options", 00:22:37.163 "params": { 00:22:37.163 "small_pool_count": 8192, 00:22:37.163 "large_pool_count": 1024, 00:22:37.163 "small_bufsize": 8192, 00:22:37.163 "large_bufsize": 135168 00:22:37.163 } 00:22:37.163 } 00:22:37.163 ] 00:22:37.163 }, 00:22:37.163 { 00:22:37.163 "subsystem": "sock", 00:22:37.163 "config": [ 00:22:37.163 { 00:22:37.163 "method": "sock_impl_set_options", 00:22:37.163 "params": { 00:22:37.163 "impl_name": "posix", 00:22:37.163 "recv_buf_size": 2097152, 00:22:37.163 "send_buf_size": 2097152, 00:22:37.163 "enable_recv_pipe": true, 00:22:37.163 "enable_quickack": false, 00:22:37.163 "enable_placement_id": 0, 00:22:37.163 "enable_zerocopy_send_server": true, 00:22:37.163 "enable_zerocopy_send_client": false, 00:22:37.163 "zerocopy_threshold": 0, 00:22:37.163 "tls_version": 0, 00:22:37.163 "enable_ktls": false 00:22:37.163 } 00:22:37.163 }, 00:22:37.163 { 00:22:37.163 "method": "sock_impl_set_options", 00:22:37.163 "params": { 00:22:37.163 "impl_name": "ssl", 00:22:37.163 "recv_buf_size": 4096, 00:22:37.163 "send_buf_size": 4096, 00:22:37.163 "enable_recv_pipe": true, 00:22:37.163 "enable_quickack": false, 00:22:37.163 "enable_placement_id": 0, 00:22:37.163 "enable_zerocopy_send_server": true, 00:22:37.163 "enable_zerocopy_send_client": false, 00:22:37.163 "zerocopy_threshold": 0, 00:22:37.163 "tls_version": 0, 00:22:37.163 "enable_ktls": false 00:22:37.163 } 00:22:37.163 } 00:22:37.163 ] 00:22:37.163 }, 00:22:37.163 { 00:22:37.163 "subsystem": "vmd", 00:22:37.163 "config": [] 00:22:37.163 }, 00:22:37.163 { 00:22:37.163 "subsystem": "accel", 00:22:37.163 "config": [ 00:22:37.163 { 00:22:37.163 "method": "accel_set_options", 00:22:37.163 "params": { 00:22:37.163 "small_cache_size": 128, 00:22:37.163 "large_cache_size": 16, 00:22:37.163 "task_count": 2048, 00:22:37.163 "sequence_count": 2048, 00:22:37.163 "buf_count": 2048 00:22:37.163 } 00:22:37.163 } 00:22:37.163 ] 00:22:37.163 }, 00:22:37.163 { 00:22:37.163 "subsystem": "bdev", 00:22:37.163 "config": [ 00:22:37.163 { 00:22:37.163 "method": "bdev_set_options", 00:22:37.163 "params": { 00:22:37.163 "bdev_io_pool_size": 65535, 00:22:37.163 "bdev_io_cache_size": 256, 00:22:37.163 "bdev_auto_examine": true, 00:22:37.163 "iobuf_small_cache_size": 128, 00:22:37.163 "iobuf_large_cache_size": 16 00:22:37.163 } 00:22:37.163 }, 
00:22:37.163 { 00:22:37.163 "method": "bdev_raid_set_options", 00:22:37.163 "params": { 00:22:37.163 "process_window_size_kb": 1024 00:22:37.163 } 00:22:37.163 }, 00:22:37.163 { 00:22:37.163 "method": "bdev_iscsi_set_options", 00:22:37.163 "params": { 00:22:37.163 "timeout_sec": 30 00:22:37.163 } 00:22:37.163 }, 00:22:37.163 { 00:22:37.163 "method": "bdev_nvme_set_options", 00:22:37.163 "params": { 00:22:37.163 "action_on_timeout": "none", 00:22:37.163 "timeout_us": 0, 00:22:37.163 "timeout_admin_us": 0, 00:22:37.163 "keep_alive_timeout_ms": 10000, 00:22:37.163 "arbitration_burst": 0, 00:22:37.163 "low_priority_weight": 0, 00:22:37.163 "medium_priority_weight": 0, 00:22:37.163 "high_priority_weight": 0, 00:22:37.163 "nvme_adminq_poll_period_us": 10000, 00:22:37.163 "nvme_ioq_poll_period_us": 0, 00:22:37.163 "io_queue_requests": 512, 00:22:37.163 "delay_cmd_submit": true, 00:22:37.163 "transport_retry_count": 4, 00:22:37.163 "bdev_retry_count": 3, 00:22:37.163 "transport_ack_timeout": 0, 00:22:37.163 "ctrlr_loss_timeout_sec": 0, 00:22:37.163 "reconnect_delay_sec": 0, 00:22:37.163 "fast_io_fail_timeout_sec": 0, 00:22:37.163 "disable_auto_failback": false, 00:22:37.163 "generate_uuids": false, 00:22:37.163 "transport_tos": 0, 00:22:37.163 "nvme_error_stat": false, 00:22:37.163 "rdma_srq_size": 0, 00:22:37.163 "io_path_stat": false, 00:22:37.163 "allow_accel_sequence": false, 00:22:37.163 "rdma_max_cq_size": 0, 00:22:37.163 "rdma_cm_event_timeout_ms": 0, 00:22:37.163 "dhchap_digests": [ 00:22:37.163 "sha256", 00:22:37.163 "sha384", 00:22:37.163 "sha512" 00:22:37.163 ], 00:22:37.163 "dhchap_dhgroups": [ 00:22:37.163 "null", 00:22:37.163 "ffdhe2048", 00:22:37.163 "ffdhe3072", 00:22:37.163 "ffdhe4096", 00:22:37.163 "ffdhe6144", 00:22:37.163 "ffdhe8192" 00:22:37.163 ] 00:22:37.163 } 00:22:37.163 }, 00:22:37.163 { 00:22:37.163 "method": "bdev_nvme_attach_controller", 00:22:37.163 "params": { 00:22:37.163 "name": "nvme0", 00:22:37.163 "trtype": "TCP", 00:22:37.164 "adrfam": "IPv4", 00:22:37.164 "traddr": "10.0.0.2", 00:22:37.164 "trsvcid": "4420", 00:22:37.164 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:37.164 "prchk_reftag": false, 00:22:37.164 "prchk_guard": false, 00:22:37.164 "ctrlr_loss_timeout_sec": 0, 00:22:37.164 "reconnect_delay_sec": 0, 00:22:37.164 "fast_io_fail_timeout_sec": 0, 00:22:37.164 "psk": "key0", 00:22:37.164 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:37.164 "hdgst": false, 00:22:37.164 "ddgst": false 00:22:37.164 } 00:22:37.164 }, 00:22:37.164 { 00:22:37.164 "method": "bdev_nvme_set_hotplug", 00:22:37.164 "params": { 00:22:37.164 "period_us": 100000, 00:22:37.164 "enable": false 00:22:37.164 } 00:22:37.164 }, 00:22:37.164 { 00:22:37.164 "method": "bdev_enable_histogram", 00:22:37.164 "params": { 00:22:37.164 "name": "nvme0n1", 00:22:37.164 "enable": true 00:22:37.164 } 00:22:37.164 }, 00:22:37.164 { 00:22:37.164 "method": "bdev_wait_for_examine" 00:22:37.164 } 00:22:37.164 ] 00:22:37.164 }, 00:22:37.164 { 00:22:37.164 "subsystem": "nbd", 00:22:37.164 "config": [] 00:22:37.164 } 00:22:37.164 ] 00:22:37.164 }' 00:22:37.164 19:38:03 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 3645719 00:22:37.164 19:38:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3645719 ']' 00:22:37.164 19:38:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3645719 00:22:37.164 19:38:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:37.164 19:38:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:37.164 
19:38:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3645719 00:22:37.164 19:38:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:37.164 19:38:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:37.164 19:38:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3645719' 00:22:37.164 killing process with pid 3645719 00:22:37.164 19:38:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3645719 00:22:37.164 Received shutdown signal, test time was about 1.000000 seconds 00:22:37.164 00:22:37.164 Latency(us) 00:22:37.164 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:37.164 =================================================================================================================== 00:22:37.164 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:37.164 19:38:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3645719 00:22:37.164 19:38:03 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 3645698 00:22:37.164 19:38:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3645698 ']' 00:22:37.164 19:38:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3645698 00:22:37.164 19:38:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:37.164 19:38:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:37.164 19:38:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3645698 00:22:37.164 19:38:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:37.164 19:38:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:37.164 19:38:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3645698' 00:22:37.164 killing process with pid 3645698 00:22:37.164 19:38:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3645698 00:22:37.164 [2024-05-15 19:38:03.339651] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:37.164 19:38:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3645698 00:22:37.425 19:38:03 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:22:37.425 19:38:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:37.425 19:38:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:37.425 19:38:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:37.425 19:38:03 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:22:37.425 "subsystems": [ 00:22:37.425 { 00:22:37.425 "subsystem": "keyring", 00:22:37.425 "config": [ 00:22:37.425 { 00:22:37.425 "method": "keyring_file_add_key", 00:22:37.425 "params": { 00:22:37.425 "name": "key0", 00:22:37.425 "path": "/tmp/tmp.0gCA0d51z1" 00:22:37.425 } 00:22:37.425 } 00:22:37.425 ] 00:22:37.425 }, 00:22:37.425 { 00:22:37.425 "subsystem": "iobuf", 00:22:37.425 "config": [ 00:22:37.425 { 00:22:37.425 "method": "iobuf_set_options", 00:22:37.425 "params": { 00:22:37.425 "small_pool_count": 8192, 00:22:37.425 "large_pool_count": 1024, 00:22:37.425 "small_bufsize": 8192, 00:22:37.425 "large_bufsize": 135168 00:22:37.425 } 00:22:37.425 } 00:22:37.425 ] 00:22:37.425 }, 00:22:37.425 { 00:22:37.425 
"subsystem": "sock", 00:22:37.425 "config": [ 00:22:37.425 { 00:22:37.425 "method": "sock_impl_set_options", 00:22:37.425 "params": { 00:22:37.425 "impl_name": "posix", 00:22:37.425 "recv_buf_size": 2097152, 00:22:37.425 "send_buf_size": 2097152, 00:22:37.425 "enable_recv_pipe": true, 00:22:37.425 "enable_quickack": false, 00:22:37.425 "enable_placement_id": 0, 00:22:37.425 "enable_zerocopy_send_server": true, 00:22:37.425 "enable_zerocopy_send_client": false, 00:22:37.425 "zerocopy_threshold": 0, 00:22:37.425 "tls_version": 0, 00:22:37.425 "enable_ktls": false 00:22:37.425 } 00:22:37.425 }, 00:22:37.425 { 00:22:37.425 "method": "sock_impl_set_options", 00:22:37.425 "params": { 00:22:37.425 "impl_name": "ssl", 00:22:37.425 "recv_buf_size": 4096, 00:22:37.425 "send_buf_size": 4096, 00:22:37.425 "enable_recv_pipe": true, 00:22:37.425 "enable_quickack": false, 00:22:37.425 "enable_placement_id": 0, 00:22:37.425 "enable_zerocopy_send_server": true, 00:22:37.425 "enable_zerocopy_send_client": false, 00:22:37.425 "zerocopy_threshold": 0, 00:22:37.425 "tls_version": 0, 00:22:37.425 "enable_ktls": false 00:22:37.425 } 00:22:37.425 } 00:22:37.425 ] 00:22:37.425 }, 00:22:37.425 { 00:22:37.425 "subsystem": "vmd", 00:22:37.425 "config": [] 00:22:37.425 }, 00:22:37.425 { 00:22:37.425 "subsystem": "accel", 00:22:37.425 "config": [ 00:22:37.425 { 00:22:37.425 "method": "accel_set_options", 00:22:37.425 "params": { 00:22:37.425 "small_cache_size": 128, 00:22:37.425 "large_cache_size": 16, 00:22:37.425 "task_count": 2048, 00:22:37.425 "sequence_count": 2048, 00:22:37.425 "buf_count": 2048 00:22:37.425 } 00:22:37.425 } 00:22:37.425 ] 00:22:37.425 }, 00:22:37.425 { 00:22:37.425 "subsystem": "bdev", 00:22:37.425 "config": [ 00:22:37.425 { 00:22:37.425 "method": "bdev_set_options", 00:22:37.425 "params": { 00:22:37.425 "bdev_io_pool_size": 65535, 00:22:37.425 "bdev_io_cache_size": 256, 00:22:37.425 "bdev_auto_examine": true, 00:22:37.425 "iobuf_small_cache_size": 128, 00:22:37.425 "iobuf_large_cache_size": 16 00:22:37.425 } 00:22:37.425 }, 00:22:37.425 { 00:22:37.425 "method": "bdev_raid_set_options", 00:22:37.425 "params": { 00:22:37.425 "process_window_size_kb": 1024 00:22:37.425 } 00:22:37.425 }, 00:22:37.425 { 00:22:37.425 "method": "bdev_iscsi_set_options", 00:22:37.425 "params": { 00:22:37.425 "timeout_sec": 30 00:22:37.425 } 00:22:37.425 }, 00:22:37.425 { 00:22:37.425 "method": "bdev_nvme_set_options", 00:22:37.425 "params": { 00:22:37.425 "action_on_timeout": "none", 00:22:37.425 "timeout_us": 0, 00:22:37.425 "timeout_admin_us": 0, 00:22:37.425 "keep_alive_timeout_ms": 10000, 00:22:37.425 "arbitration_burst": 0, 00:22:37.425 "low_priority_weight": 0, 00:22:37.425 "medium_priority_weight": 0, 00:22:37.425 "high_priority_weight": 0, 00:22:37.425 "nvme_adminq_poll_period_us": 10000, 00:22:37.425 "nvme_ioq_poll_period_us": 0, 00:22:37.425 "io_queue_requests": 0, 00:22:37.425 "delay_cmd_submit": true, 00:22:37.425 "transport_retry_count": 4, 00:22:37.425 "bdev_retry_count": 3, 00:22:37.425 "transport_ack_timeout": 0, 00:22:37.425 "ctrlr_loss_timeout_sec": 0, 00:22:37.425 "reconnect_delay_sec": 0, 00:22:37.425 "fast_io_fail_timeout_sec": 0, 00:22:37.425 "disable_auto_failback": false, 00:22:37.425 "generate_uuids": false, 00:22:37.425 "transport_tos": 0, 00:22:37.425 "nvme_error_stat": false, 00:22:37.425 "rdma_srq_size": 0, 00:22:37.425 "io_path_stat": false, 00:22:37.425 "allow_accel_sequence": false, 00:22:37.425 "rdma_max_cq_size": 0, 00:22:37.425 "rdma_cm_event_timeout_ms": 0, 00:22:37.425 
"dhchap_digests": [ 00:22:37.425 "sha256", 00:22:37.425 "sha384", 00:22:37.425 "sha512" 00:22:37.425 ], 00:22:37.425 "dhchap_dhgroups": [ 00:22:37.425 "null", 00:22:37.425 "ffdhe2048", 00:22:37.425 "ffdhe3072", 00:22:37.426 "ffdhe4096", 00:22:37.426 "ffdhe6144", 00:22:37.426 "ffdhe8192" 00:22:37.426 ] 00:22:37.426 } 00:22:37.426 }, 00:22:37.426 { 00:22:37.426 "method": "bdev_nvme_set_hotplug", 00:22:37.426 "params": { 00:22:37.426 "period_us": 100000, 00:22:37.426 "enable": false 00:22:37.426 } 00:22:37.426 }, 00:22:37.426 { 00:22:37.426 "method": "bdev_malloc_create", 00:22:37.426 "params": { 00:22:37.426 "name": "malloc0", 00:22:37.426 "num_blocks": 8192, 00:22:37.426 "block_size": 4096, 00:22:37.426 "physical_block_size": 4096, 00:22:37.426 "uuid": "5d62884e-01e6-4ea6-810f-5855c935199e", 00:22:37.426 "optimal_io_boundary": 0 00:22:37.426 } 00:22:37.426 }, 00:22:37.426 { 00:22:37.426 "method": "bdev_wait_for_examine" 00:22:37.426 } 00:22:37.426 ] 00:22:37.426 }, 00:22:37.426 { 00:22:37.426 "subsystem": "nbd", 00:22:37.426 "config": [] 00:22:37.426 }, 00:22:37.426 { 00:22:37.426 "subsystem": "scheduler", 00:22:37.426 "config": [ 00:22:37.426 { 00:22:37.426 "method": "framework_set_scheduler", 00:22:37.426 "params": { 00:22:37.426 "name": "static" 00:22:37.426 } 00:22:37.426 } 00:22:37.426 ] 00:22:37.426 }, 00:22:37.426 { 00:22:37.426 "subsystem": "nvmf", 00:22:37.426 "config": [ 00:22:37.426 { 00:22:37.426 "method": "nvmf_set_config", 00:22:37.426 "params": { 00:22:37.426 "discovery_filter": "match_any", 00:22:37.426 "admin_cmd_passthru": { 00:22:37.426 "identify_ctrlr": false 00:22:37.426 } 00:22:37.426 } 00:22:37.426 }, 00:22:37.426 { 00:22:37.426 "method": "nvmf_set_max_subsystems", 00:22:37.426 "params": { 00:22:37.426 "max_subsystems": 1024 00:22:37.426 } 00:22:37.426 }, 00:22:37.426 { 00:22:37.426 "method": "nvmf_set_crdt", 00:22:37.426 "params": { 00:22:37.426 "crdt1": 0, 00:22:37.426 "crdt2": 0, 00:22:37.426 "crdt3": 0 00:22:37.426 } 00:22:37.426 }, 00:22:37.426 { 00:22:37.426 "method": "nvmf_create_transport", 00:22:37.426 "params": { 00:22:37.426 "trtype": "TCP", 00:22:37.426 "max_queue_depth": 128, 00:22:37.426 "max_io_qpairs_per_ctrlr": 127, 00:22:37.426 "in_capsule_data_size": 4096, 00:22:37.426 "max_io_size": 131072, 00:22:37.426 "io_unit_size": 131072, 00:22:37.426 "max_aq_depth": 128, 00:22:37.426 "num_shared_buffers": 511, 00:22:37.426 "buf_cache_size": 4294967295, 00:22:37.426 "dif_insert_or_strip": false, 00:22:37.426 "zcopy": false, 00:22:37.426 "c2h_success": false, 00:22:37.426 "sock_priority": 0, 00:22:37.426 "abort_timeout_sec": 1, 00:22:37.426 "ack_timeout": 0, 00:22:37.426 "data_wr_pool_size": 0 00:22:37.426 } 00:22:37.426 }, 00:22:37.426 { 00:22:37.426 "method": "nvmf_create_subsystem", 00:22:37.426 "params": { 00:22:37.426 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:37.426 "allow_any_host": false, 00:22:37.426 "serial_number": "00000000000000000000", 00:22:37.426 "model_number": "SPDK bdev Controller", 00:22:37.426 "max_namespaces": 32, 00:22:37.426 "min_cntlid": 1, 00:22:37.426 "max_cntlid": 65519, 00:22:37.426 "ana_reporting": false 00:22:37.426 } 00:22:37.426 }, 00:22:37.426 { 00:22:37.426 "method": "nvmf_subsystem_add_host", 00:22:37.426 "params": { 00:22:37.426 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:37.426 "host": "nqn.2016-06.io.spdk:host1", 00:22:37.426 "psk": "key0" 00:22:37.426 } 00:22:37.426 }, 00:22:37.426 { 00:22:37.426 "method": "nvmf_subsystem_add_ns", 00:22:37.426 "params": { 00:22:37.426 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:37.426 
"namespace": { 00:22:37.426 "nsid": 1, 00:22:37.426 "bdev_name": "malloc0", 00:22:37.426 "nguid": "5D62884E01E64EA6810F5855C935199E", 00:22:37.426 "uuid": "5d62884e-01e6-4ea6-810f-5855c935199e", 00:22:37.426 "no_auto_visible": false 00:22:37.426 } 00:22:37.426 } 00:22:37.426 }, 00:22:37.426 { 00:22:37.426 "method": "nvmf_subsystem_add_listener", 00:22:37.426 "params": { 00:22:37.426 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:37.426 "listen_address": { 00:22:37.426 "trtype": "TCP", 00:22:37.426 "adrfam": "IPv4", 00:22:37.426 "traddr": "10.0.0.2", 00:22:37.426 "trsvcid": "4420" 00:22:37.426 }, 00:22:37.426 "secure_channel": true 00:22:37.426 } 00:22:37.426 } 00:22:37.426 ] 00:22:37.426 } 00:22:37.426 ] 00:22:37.426 }' 00:22:37.426 19:38:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3646246 00:22:37.426 19:38:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3646246 00:22:37.426 19:38:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:37.426 19:38:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3646246 ']' 00:22:37.426 19:38:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:37.426 19:38:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:37.426 19:38:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:37.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:37.426 19:38:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:37.426 19:38:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:37.426 [2024-05-15 19:38:03.542837] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:22:37.426 [2024-05-15 19:38:03.542892] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:37.426 EAL: No free 2048 kB hugepages reported on node 1 00:22:37.686 [2024-05-15 19:38:03.631147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.686 [2024-05-15 19:38:03.694709] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:37.686 [2024-05-15 19:38:03.694747] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:37.686 [2024-05-15 19:38:03.694754] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:37.686 [2024-05-15 19:38:03.694761] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:37.686 [2024-05-15 19:38:03.694768] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:37.686 [2024-05-15 19:38:03.694819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:37.946 [2024-05-15 19:38:03.883859] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:37.946 [2024-05-15 19:38:03.915840] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:37.947 [2024-05-15 19:38:03.915887] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:37.947 [2024-05-15 19:38:03.923639] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:38.519 19:38:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:38.519 19:38:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:38.519 19:38:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:38.519 19:38:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:38.519 19:38:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:38.519 19:38:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:38.519 19:38:04 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=3646431 00:22:38.519 19:38:04 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 3646431 /var/tmp/bdevperf.sock 00:22:38.519 19:38:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3646431 ']' 00:22:38.519 19:38:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:38.519 19:38:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:38.519 19:38:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:38.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
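The repeated 'Waiting for process to start up and listen on UNIX domain socket ...' messages come from the autotest waitforlisten helper. A rough stand-alone equivalent, a sketch only and not the helper's actual implementation, just polls the app's RPC socket until it answers:

    # poll an SPDK app's RPC socket until it responds (or the process dies)
    wait_for_rpc() {
        local pid=$1 sock=$2
        for _ in $(seq 1 100); do
            kill -0 "$pid" 2>/dev/null || return 1        # app exited early
            ./scripts/rpc.py -s "$sock" -t 1 rpc_get_methods >/dev/null 2>&1 && return 0
            sleep 0.1
        done
        return 1                                          # gave up after ~10s
    }

    wait_for_rpc "$bdevperf_pid" /var/tmp/bdevperf.sock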
00:22:38.519 19:38:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:38.519 19:38:04 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:22:38.519 19:38:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:38.519 19:38:04 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:22:38.519 "subsystems": [ 00:22:38.519 { 00:22:38.519 "subsystem": "keyring", 00:22:38.519 "config": [ 00:22:38.519 { 00:22:38.519 "method": "keyring_file_add_key", 00:22:38.519 "params": { 00:22:38.519 "name": "key0", 00:22:38.519 "path": "/tmp/tmp.0gCA0d51z1" 00:22:38.519 } 00:22:38.519 } 00:22:38.519 ] 00:22:38.519 }, 00:22:38.519 { 00:22:38.519 "subsystem": "iobuf", 00:22:38.519 "config": [ 00:22:38.519 { 00:22:38.519 "method": "iobuf_set_options", 00:22:38.519 "params": { 00:22:38.519 "small_pool_count": 8192, 00:22:38.519 "large_pool_count": 1024, 00:22:38.519 "small_bufsize": 8192, 00:22:38.519 "large_bufsize": 135168 00:22:38.519 } 00:22:38.519 } 00:22:38.519 ] 00:22:38.519 }, 00:22:38.519 { 00:22:38.519 "subsystem": "sock", 00:22:38.519 "config": [ 00:22:38.519 { 00:22:38.519 "method": "sock_impl_set_options", 00:22:38.519 "params": { 00:22:38.519 "impl_name": "posix", 00:22:38.519 "recv_buf_size": 2097152, 00:22:38.519 "send_buf_size": 2097152, 00:22:38.519 "enable_recv_pipe": true, 00:22:38.519 "enable_quickack": false, 00:22:38.519 "enable_placement_id": 0, 00:22:38.519 "enable_zerocopy_send_server": true, 00:22:38.519 "enable_zerocopy_send_client": false, 00:22:38.519 "zerocopy_threshold": 0, 00:22:38.519 "tls_version": 0, 00:22:38.519 "enable_ktls": false 00:22:38.519 } 00:22:38.519 }, 00:22:38.519 { 00:22:38.519 "method": "sock_impl_set_options", 00:22:38.519 "params": { 00:22:38.519 "impl_name": "ssl", 00:22:38.519 "recv_buf_size": 4096, 00:22:38.519 "send_buf_size": 4096, 00:22:38.519 "enable_recv_pipe": true, 00:22:38.519 "enable_quickack": false, 00:22:38.519 "enable_placement_id": 0, 00:22:38.519 "enable_zerocopy_send_server": true, 00:22:38.519 "enable_zerocopy_send_client": false, 00:22:38.519 "zerocopy_threshold": 0, 00:22:38.519 "tls_version": 0, 00:22:38.519 "enable_ktls": false 00:22:38.519 } 00:22:38.519 } 00:22:38.519 ] 00:22:38.519 }, 00:22:38.519 { 00:22:38.519 "subsystem": "vmd", 00:22:38.519 "config": [] 00:22:38.519 }, 00:22:38.519 { 00:22:38.519 "subsystem": "accel", 00:22:38.519 "config": [ 00:22:38.519 { 00:22:38.519 "method": "accel_set_options", 00:22:38.519 "params": { 00:22:38.519 "small_cache_size": 128, 00:22:38.519 "large_cache_size": 16, 00:22:38.519 "task_count": 2048, 00:22:38.519 "sequence_count": 2048, 00:22:38.519 "buf_count": 2048 00:22:38.519 } 00:22:38.519 } 00:22:38.519 ] 00:22:38.519 }, 00:22:38.519 { 00:22:38.519 "subsystem": "bdev", 00:22:38.519 "config": [ 00:22:38.519 { 00:22:38.519 "method": "bdev_set_options", 00:22:38.519 "params": { 00:22:38.519 "bdev_io_pool_size": 65535, 00:22:38.519 "bdev_io_cache_size": 256, 00:22:38.519 "bdev_auto_examine": true, 00:22:38.519 "iobuf_small_cache_size": 128, 00:22:38.519 "iobuf_large_cache_size": 16 00:22:38.519 } 00:22:38.519 }, 00:22:38.519 { 00:22:38.519 "method": "bdev_raid_set_options", 00:22:38.519 "params": { 00:22:38.519 "process_window_size_kb": 1024 00:22:38.519 } 00:22:38.519 }, 00:22:38.519 { 00:22:38.519 "method": "bdev_iscsi_set_options", 00:22:38.519 "params": { 00:22:38.519 "timeout_sec": 30 00:22:38.520 } 
00:22:38.520 }, 00:22:38.520 { 00:22:38.520 "method": "bdev_nvme_set_options", 00:22:38.520 "params": { 00:22:38.520 "action_on_timeout": "none", 00:22:38.520 "timeout_us": 0, 00:22:38.520 "timeout_admin_us": 0, 00:22:38.520 "keep_alive_timeout_ms": 10000, 00:22:38.520 "arbitration_burst": 0, 00:22:38.520 "low_priority_weight": 0, 00:22:38.520 "medium_priority_weight": 0, 00:22:38.520 "high_priority_weight": 0, 00:22:38.520 "nvme_adminq_poll_period_us": 10000, 00:22:38.520 "nvme_ioq_poll_period_us": 0, 00:22:38.520 "io_queue_requests": 512, 00:22:38.520 "delay_cmd_submit": true, 00:22:38.520 "transport_retry_count": 4, 00:22:38.520 "bdev_retry_count": 3, 00:22:38.520 "transport_ack_timeout": 0, 00:22:38.520 "ctrlr_loss_timeout_sec": 0, 00:22:38.520 "reconnect_delay_sec": 0, 00:22:38.520 "fast_io_fail_timeout_sec": 0, 00:22:38.520 "disable_auto_failback": false, 00:22:38.520 "generate_uuids": false, 00:22:38.520 "transport_tos": 0, 00:22:38.520 "nvme_error_stat": false, 00:22:38.520 "rdma_srq_size": 0, 00:22:38.520 "io_path_stat": false, 00:22:38.520 "allow_accel_sequence": false, 00:22:38.520 "rdma_max_cq_size": 0, 00:22:38.520 "rdma_cm_event_timeout_ms": 0, 00:22:38.520 "dhchap_digests": [ 00:22:38.520 "sha256", 00:22:38.520 "sha384", 00:22:38.520 "sha512" 00:22:38.520 ], 00:22:38.520 "dhchap_dhgroups": [ 00:22:38.520 "null", 00:22:38.520 "ffdhe2048", 00:22:38.520 "ffdhe3072", 00:22:38.520 "ffdhe4096", 00:22:38.520 "ffdhe6144", 00:22:38.520 "ffdhe8192" 00:22:38.520 ] 00:22:38.520 } 00:22:38.520 }, 00:22:38.520 { 00:22:38.520 "method": "bdev_nvme_attach_controller", 00:22:38.520 "params": { 00:22:38.520 "name": "nvme0", 00:22:38.520 "trtype": "TCP", 00:22:38.520 "adrfam": "IPv4", 00:22:38.520 "traddr": "10.0.0.2", 00:22:38.520 "trsvcid": "4420", 00:22:38.520 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:38.520 "prchk_reftag": false, 00:22:38.520 "prchk_guard": false, 00:22:38.520 "ctrlr_loss_timeout_sec": 0, 00:22:38.520 "reconnect_delay_sec": 0, 00:22:38.520 "fast_io_fail_timeout_sec": 0, 00:22:38.520 "psk": "key0", 00:22:38.520 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:38.520 "hdgst": false, 00:22:38.520 "ddgst": false 00:22:38.520 } 00:22:38.520 }, 00:22:38.520 { 00:22:38.520 "method": "bdev_nvme_set_hotplug", 00:22:38.520 "params": { 00:22:38.520 "period_us": 100000, 00:22:38.520 "enable": false 00:22:38.520 } 00:22:38.520 }, 00:22:38.520 { 00:22:38.520 "method": "bdev_enable_histogram", 00:22:38.520 "params": { 00:22:38.520 "name": "nvme0n1", 00:22:38.520 "enable": true 00:22:38.520 } 00:22:38.520 }, 00:22:38.520 { 00:22:38.520 "method": "bdev_wait_for_examine" 00:22:38.520 } 00:22:38.520 ] 00:22:38.520 }, 00:22:38.520 { 00:22:38.520 "subsystem": "nbd", 00:22:38.520 "config": [] 00:22:38.520 } 00:22:38.520 ] 00:22:38.520 }' 00:22:38.520 [2024-05-15 19:38:04.489249] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
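The config echoed above performs the TLS attach at bdevperf start-up; the same steps can be driven over bdevperf's RPC socket once it is idling with -z. A sketch under the assumption that the rpc.py short options below match this SPDK revision: the method names, the key name/path, and the perform_tests step are taken from the log, while the flag spellings are from memory and may differ between releases.

    # bdevperf waits for RPC (-z); register the PSK, attach the TLS-secured
    # controller, then kick off the configured verify workload
    ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
    bdevperf_pid=$!
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0gCA0d51z1
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests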
00:22:38.520 [2024-05-15 19:38:04.489299] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3646431 ] 00:22:38.520 EAL: No free 2048 kB hugepages reported on node 1 00:22:38.520 [2024-05-15 19:38:04.552905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.520 [2024-05-15 19:38:04.616801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:38.781 [2024-05-15 19:38:04.747247] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:39.352 19:38:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:39.352 19:38:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:39.352 19:38:05 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:39.352 19:38:05 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:22:39.613 19:38:05 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.613 19:38:05 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:39.613 Running I/O for 1 seconds... 00:22:40.556 00:22:40.556 Latency(us) 00:22:40.556 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:40.556 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:40.556 Verification LBA range: start 0x0 length 0x2000 00:22:40.556 nvme0n1 : 1.07 2057.51 8.04 0.00 0.00 60506.19 9120.43 124081.49 00:22:40.556 =================================================================================================================== 00:22:40.556 Total : 2057.51 8.04 0.00 0.00 60506.19 9120.43 124081.49 00:22:40.556 0 00:22:40.817 19:38:06 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:22:40.817 19:38:06 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:22:40.817 19:38:06 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:22:40.817 19:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:22:40.817 19:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:22:40.817 19:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:22:40.817 19:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:40.817 19:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:22:40.817 19:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:22:40.817 19:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:22:40.817 19:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:40.817 nvmf_trace.0 00:22:40.817 19:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:22:40.817 19:38:06 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 3646431 00:22:40.817 19:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3646431 ']' 00:22:40.817 19:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3646431 
00:22:40.817 19:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:40.817 19:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:40.817 19:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3646431 00:22:40.817 19:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:40.817 19:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:40.817 19:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3646431' 00:22:40.817 killing process with pid 3646431 00:22:40.817 19:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3646431 00:22:40.817 Received shutdown signal, test time was about 1.000000 seconds 00:22:40.817 00:22:40.817 Latency(us) 00:22:40.817 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:40.817 =================================================================================================================== 00:22:40.817 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:40.817 19:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3646431 00:22:41.078 19:38:07 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:22:41.078 19:38:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:41.078 19:38:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:22:41.078 19:38:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:41.078 19:38:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:22:41.078 19:38:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:41.078 19:38:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:41.078 rmmod nvme_tcp 00:22:41.078 rmmod nvme_fabrics 00:22:41.078 rmmod nvme_keyring 00:22:41.078 19:38:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:41.078 19:38:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:22:41.078 19:38:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:22:41.078 19:38:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 3646246 ']' 00:22:41.078 19:38:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 3646246 00:22:41.078 19:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3646246 ']' 00:22:41.078 19:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3646246 00:22:41.078 19:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:41.078 19:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:41.078 19:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3646246 00:22:41.078 19:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:41.078 19:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:41.078 19:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3646246' 00:22:41.078 killing process with pid 3646246 00:22:41.078 19:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3646246 00:22:41.078 [2024-05-15 19:38:07.167828] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:41.078 19:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- 
# wait 3646246 00:22:41.339 19:38:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:41.339 19:38:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:41.339 19:38:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:41.339 19:38:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:41.339 19:38:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:41.339 19:38:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.339 19:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:41.339 19:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.254 19:38:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:43.254 19:38:09 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.1H23aOuBJo /tmp/tmp.vH6vcTP6MA /tmp/tmp.0gCA0d51z1 00:22:43.254 00:22:43.254 real 1m23.037s 00:22:43.254 user 2m7.523s 00:22:43.254 sys 0m28.247s 00:22:43.254 19:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:43.254 19:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:43.254 ************************************ 00:22:43.254 END TEST nvmf_tls 00:22:43.254 ************************************ 00:22:43.254 19:38:09 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:43.254 19:38:09 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:43.254 19:38:09 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:43.254 19:38:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:43.515 ************************************ 00:22:43.515 START TEST nvmf_fips 00:22:43.515 ************************************ 00:22:43.515 19:38:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:43.515 * Looking for test storage... 
00:22:43.515 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:43.515 19:38:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:43.515 19:38:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:22:43.515 19:38:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:43.515 19:38:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:43.515 19:38:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.516 19:38:09 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:22:43.516 19:38:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:22:43.777 19:38:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:22:43.778 19:38:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:22:43.778 19:38:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:43.778 19:38:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:22:43.778 19:38:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:22:43.778 19:38:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:43.778 19:38:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:22:43.778 19:38:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:22:43.778 19:38:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:43.778 19:38:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:22:43.778 19:38:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:43.778 19:38:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:22:43.778 19:38:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:43.778 19:38:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:22:43.778 19:38:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:22:43.778 19:38:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:22:43.778 Error setting digest 00:22:43.778 0012A27E007F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:22:43.778 0012A27E007F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:22:43.778 19:38:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:22:43.778 19:38:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:43.778 19:38:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:43.778 19:38:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:43.778 19:38:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:22:43.778 19:38:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:43.778 19:38:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:43.778 19:38:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:43.778 19:38:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:43.778 19:38:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:43.778 19:38:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.778 19:38:09 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:43.778 19:38:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.778 19:38:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:43.778 19:38:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:43.778 19:38:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:22:43.778 19:38:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:51.924 
19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:51.924 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:51.924 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:51.924 Found net devices under 0000:31:00.0: cvl_0_0 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:51.924 Found net devices under 0000:31:00.1: cvl_0_1 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:51.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:51.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.571 ms 00:22:51.924 00:22:51.924 --- 10.0.0.2 ping statistics --- 00:22:51.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.924 rtt min/avg/max/mdev = 0.571/0.571/0.571/0.000 ms 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:51.924 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:51.924 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.353 ms 00:22:51.924 00:22:51.924 --- 10.0.0.1 ping statistics --- 00:22:51.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.924 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:51.924 19:38:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:51.924 19:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=3651734 00:22:51.924 19:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 3651734 00:22:51.924 19:38:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 3651734 ']' 00:22:51.924 19:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:51.924 19:38:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.924 19:38:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:51.924 19:38:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.924 19:38:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:51.924 19:38:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:51.924 [2024-05-15 19:38:18.080491] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:22:51.924 [2024-05-15 19:38:18.080560] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:52.186 EAL: No free 2048 kB hugepages reported on node 1 00:22:52.186 [2024-05-15 19:38:18.157893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.186 [2024-05-15 19:38:18.230405] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:52.186 [2024-05-15 19:38:18.230440] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:52.186 [2024-05-15 19:38:18.230449] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:52.186 [2024-05-15 19:38:18.230456] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:52.186 [2024-05-15 19:38:18.230462] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:52.186 [2024-05-15 19:38:18.230481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:52.758 19:38:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:52.758 19:38:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:22:52.758 19:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:52.758 19:38:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:52.758 19:38:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:53.019 19:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:53.019 19:38:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:22:53.019 19:38:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:53.019 19:38:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:53.019 19:38:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:53.019 19:38:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:53.019 19:38:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:53.019 19:38:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:53.019 19:38:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:53.019 [2024-05-15 19:38:19.153774] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:53.019 [2024-05-15 19:38:19.169765] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:53.019 [2024-05-15 19:38:19.169808] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:53.019 [2024-05-15 19:38:19.169960] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:53.019 [2024-05-15 19:38:19.196499] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:53.019 malloc0 00:22:53.281 19:38:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:53.281 19:38:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=3651852 00:22:53.281 19:38:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 3651852 /var/tmp/bdevperf.sock 00:22:53.281 19:38:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:53.281 19:38:19 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@827 -- # '[' -z 3651852 ']' 00:22:53.281 19:38:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:53.281 19:38:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:53.281 19:38:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:53.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:53.281 19:38:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:53.281 19:38:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:53.281 [2024-05-15 19:38:19.302240] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:22:53.281 [2024-05-15 19:38:19.302292] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3651852 ] 00:22:53.281 EAL: No free 2048 kB hugepages reported on node 1 00:22:53.281 [2024-05-15 19:38:19.356503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.281 [2024-05-15 19:38:19.408566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:53.542 19:38:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:53.542 19:38:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:22:53.542 19:38:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:53.542 [2024-05-15 19:38:19.671933] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:53.542 [2024-05-15 19:38:19.671995] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:53.803 TLSTESTn1 00:22:53.803 19:38:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:53.803 Running I/O for 10 seconds... 
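The TLS setup driven above reduces to two steps: the interoperability PSK is written to a mode-0600 file, and that file is handed to bdev_nvme_attach_controller through --psk so bdevperf opens a TLS-protected queue pair to the target. A minimal hand-run sketch of the same sequence, reusing the key string, RPC socket, address and NQNs printed in this run (any other environment needs its own values):

  KEY='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  KEY_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt
  echo -n "$KEY" > "$KEY_PATH" && chmod 0600 "$KEY_PATH"    # PSK file must not be world-readable
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk "$KEY_PATH"                                     # flagged as deprecated in the warnings above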
00:23:03.803 00:23:03.803 Latency(us) 00:23:03.803 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.803 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:03.803 Verification LBA range: start 0x0 length 0x2000 00:23:03.803 TLSTESTn1 : 10.03 3390.49 13.24 0.00 0.00 37689.25 4833.28 55268.69 00:23:03.803 =================================================================================================================== 00:23:03.803 Total : 3390.49 13.24 0.00 0.00 37689.25 4833.28 55268.69 00:23:03.803 0 00:23:03.803 19:38:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:03.803 19:38:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:03.803 19:38:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:23:03.803 19:38:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:23:03.803 19:38:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:23:03.803 19:38:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:03.803 19:38:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:23:03.803 19:38:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:23:03.803 19:38:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:23:03.803 19:38:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:03.803 nvmf_trace.0 00:23:04.063 19:38:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:23:04.063 19:38:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3651852 00:23:04.063 19:38:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 3651852 ']' 00:23:04.063 19:38:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 3651852 00:23:04.063 19:38:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:23:04.063 19:38:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:04.063 19:38:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3651852 00:23:04.063 19:38:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:04.063 19:38:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:04.063 19:38:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3651852' 00:23:04.063 killing process with pid 3651852 00:23:04.063 19:38:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 3651852 00:23:04.063 Received shutdown signal, test time was about 10.000000 seconds 00:23:04.063 00:23:04.063 Latency(us) 00:23:04.063 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:04.063 =================================================================================================================== 00:23:04.063 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:04.063 [2024-05-15 19:38:30.101833] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:04.063 19:38:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 3651852 00:23:04.063 19:38:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:04.063 19:38:30 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:23:04.063 19:38:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:23:04.063 19:38:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:04.063 19:38:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:23:04.063 19:38:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:04.063 19:38:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:04.063 rmmod nvme_tcp 00:23:04.063 rmmod nvme_fabrics 00:23:04.322 rmmod nvme_keyring 00:23:04.323 19:38:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:04.323 19:38:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:23:04.323 19:38:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:23:04.323 19:38:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 3651734 ']' 00:23:04.323 19:38:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 3651734 00:23:04.323 19:38:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 3651734 ']' 00:23:04.323 19:38:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 3651734 00:23:04.323 19:38:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:23:04.323 19:38:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:04.323 19:38:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3651734 00:23:04.323 19:38:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:04.323 19:38:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:04.323 19:38:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3651734' 00:23:04.323 killing process with pid 3651734 00:23:04.323 19:38:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 3651734 00:23:04.323 [2024-05-15 19:38:30.346926] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:04.323 [2024-05-15 19:38:30.346964] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:04.323 19:38:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 3651734 00:23:04.323 19:38:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:04.323 19:38:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:04.323 19:38:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:04.323 19:38:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:04.323 19:38:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:04.323 19:38:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.323 19:38:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:04.323 19:38:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.873 19:38:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:06.873 19:38:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:06.873 00:23:06.873 real 0m23.094s 00:23:06.873 user 0m22.878s 00:23:06.873 sys 0m10.450s 00:23:06.873 19:38:32 
nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:06.873 19:38:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:06.873 ************************************ 00:23:06.873 END TEST nvmf_fips 00:23:06.873 ************************************ 00:23:06.873 19:38:32 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:23:06.873 19:38:32 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:23:06.873 19:38:32 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:23:06.873 19:38:32 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:23:06.873 19:38:32 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:23:06.873 19:38:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:15.007 19:38:40 
nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:15.007 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:15.007 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:15.007 Found net devices under 0000:31:00.0: cvl_0_0 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:15.007 Found net devices under 0000:31:00.1: cvl_0_1 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:23:15.007 19:38:40 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 
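The gather_supported_nvmf_pci_devs trace that follows (like the one earlier in the FIPS run) selects NICs purely by PCI vendor/device ID, 0x8086/0x159b for the two E810 ports found in this chassis, and then maps each PCI function to its netdev through sysfs. A stand-alone approximation of that lookup (a sketch, not the harness's exact code):

  # list net devices backed by Intel E810 ports (vendor 0x8086, device 0x159b)
  for pci in /sys/bus/pci/devices/*; do
      [[ $(< "$pci/vendor") == 0x8086 && $(< "$pci/device") == 0x159b ]] || continue
      ls "$pci/net" 2>/dev/null      # prints cvl_0_0 / cvl_0_1 on this host
  done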
00:23:15.007 19:38:40 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:15.007 19:38:40 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:15.007 19:38:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:15.007 ************************************ 00:23:15.007 START TEST nvmf_perf_adq 00:23:15.007 ************************************ 00:23:15.007 19:38:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:15.007 * Looking for test storage... 00:23:15.007 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:15.007 19:38:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:15.007 19:38:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:23:15.007 19:38:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:15.007 19:38:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:15.007 19:38:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:15.007 19:38:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:15.007 19:38:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:15.007 19:38:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:15.007 19:38:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:15.007 19:38:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:15.007 19:38:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:15.007 19:38:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:15.007 19:38:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:15.007 19:38:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:15.007 19:38:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:15.007 19:38:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:15.007 19:38:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:15.007 19:38:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:15.007 19:38:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:15.007 19:38:40 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:15.007 19:38:40 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:15.007 19:38:40 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:15.007 19:38:40 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.007 19:38:40 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.007 19:38:40 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.007 19:38:40 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:23:15.007 19:38:40 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.007 19:38:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:23:15.007 19:38:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:15.007 19:38:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:15.008 19:38:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:15.008 19:38:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:15.008 19:38:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:15.008 19:38:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:15.008 19:38:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:15.008 19:38:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:15.008 19:38:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:23:15.008 19:38:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:23:15.008 19:38:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:23.143 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
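The backslash-riddled comparisons in this trace ([[ tcp == \r\d\m\a ]], [[ 0x159b == \0\x\1\0\1\7 ]], and so on) are not garbled output: the scripts quote the right-hand side of == so it is matched as a literal string rather than a glob, and bash's xtrace re-prints that quoted pattern with every character escaped. A tiny illustration with a hypothetical variable:

  set -x
  transport=tcp
  if [[ $transport == "rdma" ]]; then echo rdma; fi   # xtrace renders this as: [[ tcp == \r\d\m\a ]]
  set +x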
00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:23.143 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:23.143 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:23.144 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:23.144 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:23.144 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.144 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:23.144 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.144 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:23.144 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:23.144 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.144 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:23.144 Found net devices under 0000:31:00.0: cvl_0_0 00:23:23.144 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.144 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:23.144 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.144 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:23.144 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.144 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:23.144 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:23.144 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.144 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:23.144 Found net devices under 0000:31:00.1: cvl_0_1 00:23:23.144 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.144 19:38:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:23.144 19:38:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:23.144 19:38:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 
-- # (( 2 == 0 )) 00:23:23.144 19:38:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:23.144 19:38:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:23:23.144 19:38:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:23:24.586 19:38:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:23:26.529 19:38:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:31.820 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:31.820 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:31.820 Found net devices under 0000:31:00.0: cvl_0_0 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:31.820 Found net devices under 0000:31:00.1: cvl_0_1 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:31.820 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:31.821 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:31.821 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:31.821 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:31.821 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:23:31.821 00:23:31.821 --- 10.0.0.2 ping statistics --- 00:23:31.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:31.821 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:23:31.821 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:31.821 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:31.821 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.421 ms 00:23:31.821 00:23:31.821 --- 10.0.0.1 ping statistics --- 00:23:31.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:31.821 rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms 00:23:31.821 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:31.821 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:23:31.821 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:31.821 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:31.821 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:31.821 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:31.821 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:31.821 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:31.821 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:31.821 19:38:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:31.821 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:31.821 19:38:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:31.821 19:38:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:31.821 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3664756 00:23:31.821 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3664756 00:23:31.821 19:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:31.821 19:38:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 3664756 ']' 00:23:31.821 19:38:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:31.821 19:38:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:31.821 19:38:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
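Condensed, the loopback topology nvmf_tcp_init has just rebuilt for the ADQ run is: one E810 port (cvl_0_0, 10.0.0.2/24) moved into the cvl_0_0_ns_spdk namespace where the target will live, the sibling port (cvl_0_1, 10.0.0.1/24) left in the root namespace for the initiator, TCP port 4420 opened in iptables, and nvmf_tgt launched inside the namespace with --wait-for-rpc. A hand-run approximation using the names and addresses from this log (SPDK_BIN stands in for the build directory used here):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN"/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &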
00:23:31.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:31.821 19:38:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:31.821 19:38:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:31.821 [2024-05-15 19:38:57.947446] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:23:31.821 [2024-05-15 19:38:57.947497] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:31.821 EAL: No free 2048 kB hugepages reported on node 1 00:23:32.081 [2024-05-15 19:38:58.039384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:32.081 [2024-05-15 19:38:58.105225] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:32.081 [2024-05-15 19:38:58.105272] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:32.081 [2024-05-15 19:38:58.105280] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:32.081 [2024-05-15 19:38:58.105287] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:32.081 [2024-05-15 19:38:58.105292] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:32.081 [2024-05-15 19:38:58.105343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:32.081 [2024-05-15 19:38:58.105420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:32.081 [2024-05-15 19:38:58.105678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:32.081 [2024-05-15 19:38:58.105678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.654 19:38:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:32.654 19:38:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:23:32.654 19:38:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:32.654 19:38:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:32.654 19:38:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:32.914 19:38:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:32.914 19:38:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:23:32.915 19:38:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:32.915 19:38:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:32.915 19:38:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.915 19:38:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:32.915 19:38:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.915 19:38:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:32.915 19:38:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:23:32.915 19:38:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.915 19:38:58 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:23:32.915 19:38:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.915 19:38:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:32.915 19:38:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.915 19:38:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:32.915 19:38:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.915 19:38:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:23:32.915 19:38:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.915 19:38:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:32.915 [2024-05-15 19:38:58.997276] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:32.915 19:38:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.915 19:38:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:32.915 19:38:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.915 19:38:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:32.915 Malloc1 00:23:32.915 19:38:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.915 19:38:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:32.915 19:38:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.915 19:38:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:32.915 19:38:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.915 19:38:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:32.915 19:38:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.915 19:38:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:32.915 19:38:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.915 19:38:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:32.915 19:38:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.915 19:38:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:32.915 [2024-05-15 19:38:59.056386] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:32.915 [2024-05-15 19:38:59.056623] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:32.915 19:38:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.915 19:38:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=3665106 00:23:32.915 19:38:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:23:32.915 19:38:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:32.915 EAL: No free 2048 kB hugepages reported on node 1 00:23:35.457 19:39:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:23:35.457 19:39:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.458 19:39:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:35.458 19:39:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.458 19:39:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:23:35.458 "tick_rate": 2400000000, 00:23:35.458 "poll_groups": [ 00:23:35.458 { 00:23:35.458 "name": "nvmf_tgt_poll_group_000", 00:23:35.458 "admin_qpairs": 1, 00:23:35.458 "io_qpairs": 1, 00:23:35.458 "current_admin_qpairs": 1, 00:23:35.458 "current_io_qpairs": 1, 00:23:35.458 "pending_bdev_io": 0, 00:23:35.458 "completed_nvme_io": 19498, 00:23:35.458 "transports": [ 00:23:35.458 { 00:23:35.458 "trtype": "TCP" 00:23:35.458 } 00:23:35.458 ] 00:23:35.458 }, 00:23:35.458 { 00:23:35.458 "name": "nvmf_tgt_poll_group_001", 00:23:35.458 "admin_qpairs": 0, 00:23:35.458 "io_qpairs": 1, 00:23:35.458 "current_admin_qpairs": 0, 00:23:35.458 "current_io_qpairs": 1, 00:23:35.458 "pending_bdev_io": 0, 00:23:35.458 "completed_nvme_io": 27622, 00:23:35.458 "transports": [ 00:23:35.458 { 00:23:35.458 "trtype": "TCP" 00:23:35.458 } 00:23:35.458 ] 00:23:35.458 }, 00:23:35.458 { 00:23:35.458 "name": "nvmf_tgt_poll_group_002", 00:23:35.458 "admin_qpairs": 0, 00:23:35.458 "io_qpairs": 1, 00:23:35.458 "current_admin_qpairs": 0, 00:23:35.458 "current_io_qpairs": 1, 00:23:35.458 "pending_bdev_io": 0, 00:23:35.458 "completed_nvme_io": 19887, 00:23:35.458 "transports": [ 00:23:35.458 { 00:23:35.458 "trtype": "TCP" 00:23:35.458 } 00:23:35.458 ] 00:23:35.458 }, 00:23:35.458 { 00:23:35.458 "name": "nvmf_tgt_poll_group_003", 00:23:35.458 "admin_qpairs": 0, 00:23:35.458 "io_qpairs": 1, 00:23:35.458 "current_admin_qpairs": 0, 00:23:35.458 "current_io_qpairs": 1, 00:23:35.458 "pending_bdev_io": 0, 00:23:35.458 "completed_nvme_io": 20728, 00:23:35.458 "transports": [ 00:23:35.458 { 00:23:35.458 "trtype": "TCP" 00:23:35.458 } 00:23:35.458 ] 00:23:35.458 } 00:23:35.458 ] 00:23:35.458 }' 00:23:35.458 19:39:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:23:35.458 19:39:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:23:35.458 19:39:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:23:35.458 19:39:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:23:35.458 19:39:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 3665106 00:23:43.588 Initializing NVMe Controllers 00:23:43.588 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:43.588 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:43.588 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:43.588 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:43.588 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:43.588 Initialization complete. Launching workers. 
00:23:43.588 ======================================================== 00:23:43.588 Latency(us) 00:23:43.588 Device Information : IOPS MiB/s Average min max 00:23:43.588 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11036.44 43.11 5800.76 1807.26 9246.92 00:23:43.588 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14760.12 57.66 4335.72 914.07 9844.51 00:23:43.588 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10656.15 41.63 6006.48 1964.69 11622.29 00:23:43.588 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10310.85 40.28 6207.85 1392.94 12067.58 00:23:43.588 ======================================================== 00:23:43.588 Total : 46763.56 182.67 5474.98 914.07 12067.58 00:23:43.588 00:23:43.588 19:39:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:23:43.588 19:39:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:43.588 19:39:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:23:43.588 19:39:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:43.588 19:39:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:23:43.588 19:39:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:43.588 19:39:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:43.588 rmmod nvme_tcp 00:23:43.588 rmmod nvme_fabrics 00:23:43.588 rmmod nvme_keyring 00:23:43.588 19:39:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:43.588 19:39:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:23:43.588 19:39:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:23:43.588 19:39:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3664756 ']' 00:23:43.588 19:39:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3664756 00:23:43.588 19:39:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 3664756 ']' 00:23:43.588 19:39:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 3664756 00:23:43.588 19:39:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:23:43.588 19:39:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:43.588 19:39:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3664756 00:23:43.588 19:39:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:43.588 19:39:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:43.589 19:39:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3664756' 00:23:43.589 killing process with pid 3664756 00:23:43.589 19:39:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 3664756 00:23:43.589 [2024-05-15 19:39:09.350762] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:43.589 19:39:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 3664756 00:23:43.589 19:39:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:43.589 19:39:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:43.589 19:39:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:43.589 19:39:09 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:43.589 19:39:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:43.589 19:39:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.589 19:39:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:43.589 19:39:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:45.498 19:39:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:45.498 19:39:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:23:45.498 19:39:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:23:47.404 19:39:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:23:49.319 19:39:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:23:54.610 
19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:54.610 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:54.610 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:54.610 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == 
rdma ]] 00:23:54.611 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:54.611 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.611 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:54.611 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:54.611 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:54.611 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:54.611 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.611 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:54.611 Found net devices under 0000:31:00.0: cvl_0_0 00:23:54.611 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.611 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:54.611 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.611 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:54.611 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:54.611 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:54.611 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:54.611 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.611 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:54.611 Found net devices under 0000:31:00.1: cvl_0_1 00:23:54.611 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.611 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:54.611 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:23:54.611 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:54.611 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:54.611 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:54.611 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:54.611 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:54.611 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:54.611 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:54.611 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:54.611 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:54.611 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:54.611 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:54.611 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:54.611 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:54.611 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush 
cvl_0_1 00:23:54.611 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:54.611 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:54.611 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:54.611 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:54.611 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:54.611 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:54.611 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:54.611 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:54.611 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:54.611 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:54.611 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:23:54.611 00:23:54.611 --- 10.0.0.2 ping statistics --- 00:23:54.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:54.611 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:23:54.611 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:54.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:54.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.349 ms 00:23:54.896 00:23:54.896 --- 10.0.0.1 ping statistics --- 00:23:54.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:54.896 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:23:54.896 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:54.896 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:23:54.896 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:54.896 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:54.896 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:54.896 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:54.896 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:54.896 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:54.896 19:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:54.896 19:39:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:23:54.896 19:39:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:23:54.896 19:39:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:23:54.896 19:39:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:23:54.896 net.core.busy_poll = 1 00:23:54.896 19:39:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:23:54.896 net.core.busy_read = 1 00:23:54.896 19:39:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:23:54.896 19:39:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec 
cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:23:54.896 19:39:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:23:55.158 19:39:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:23:55.158 19:39:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:23:55.158 19:39:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:55.158 19:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:55.158 19:39:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:55.158 19:39:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:55.158 19:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3670157 00:23:55.158 19:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3670157 00:23:55.158 19:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:55.158 19:39:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 3670157 ']' 00:23:55.158 19:39:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.158 19:39:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:55.158 19:39:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:55.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:55.158 19:39:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:55.158 19:39:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:55.158 [2024-05-15 19:39:21.183960] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:23:55.158 [2024-05-15 19:39:21.184015] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:55.158 EAL: No free 2048 kB hugepages reported on node 1 00:23:55.158 [2024-05-15 19:39:21.280332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:55.418 [2024-05-15 19:39:21.378096] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:55.418 [2024-05-15 19:39:21.378159] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:55.418 [2024-05-15 19:39:21.378168] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:55.418 [2024-05-15 19:39:21.378175] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:55.418 [2024-05-15 19:39:21.378182] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
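For reference, the adq_configure_driver steps traced just above (target/perf_adq.sh@22 through @38) reduce to a handful of host-side commands. A minimal standalone sketch, assuming the same interface name (cvl_0_0) and namespace (cvl_0_0_ns_spdk) as this run; the mqprio layout, the flower match (10.0.0.2/32, TCP port 4420, hw_tc 1) and the sysctls are copied from the trace, everything else is illustrative, not the test's own code:

ns() { ip netns exec cvl_0_0_ns_spdk "$@"; }   # run a command inside the target namespace

ns ethtool --offload cvl_0_0 hw-tc-offload on                    # ADQ prerequisite: hardware TC offload
ns ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1                                   # busy polling, as enabled in the trace above
sysctl -w net.core.busy_read=1
ns tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
ns tc qdisc add dev cvl_0_0 ingress
ns tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1   # steer NVMe/TCP (port 4420) into TC 1

The set_xps_rxqs helper the test runs afterwards (scripts/perf/nvmf/set_xps_rxqs in the SPDK tree) pins transmit queues to their matching receive queues and is not reproduced here.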
00:23:55.418 [2024-05-15 19:39:21.378330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:55.418 [2024-05-15 19:39:21.378438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:55.418 [2024-05-15 19:39:21.378766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:55.418 [2024-05-15 19:39:21.378770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.356 19:39:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:56.356 19:39:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:23:56.356 19:39:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:56.356 19:39:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:56.356 19:39:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:56.356 19:39:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:56.356 19:39:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:23:56.356 19:39:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:56.356 19:39:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:56.356 19:39:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.356 19:39:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:56.356 19:39:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.356 19:39:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:56.356 19:39:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:23:56.356 19:39:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.356 19:39:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:56.356 19:39:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.356 19:39:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:56.356 19:39:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.356 19:39:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:56.356 19:39:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.356 19:39:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:23:56.356 19:39:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.356 19:39:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:56.356 [2024-05-15 19:39:22.236621] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:56.356 19:39:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.356 19:39:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:56.356 19:39:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.356 19:39:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:56.356 Malloc1 00:23:56.356 19:39:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.356 19:39:22 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:56.356 19:39:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.356 19:39:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:56.356 19:39:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.356 19:39:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:56.356 19:39:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.356 19:39:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:56.356 19:39:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.356 19:39:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:56.356 19:39:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.356 19:39:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:56.356 [2024-05-15 19:39:22.295779] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:56.356 [2024-05-15 19:39:22.296029] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:56.356 19:39:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.356 19:39:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=3670502 00:23:56.356 19:39:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:23:56.356 19:39:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:56.356 EAL: No free 2048 kB hugepages reported on node 1 00:23:58.280 19:39:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:23:58.280 19:39:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.280 19:39:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:58.280 19:39:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.280 19:39:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:23:58.280 "tick_rate": 2400000000, 00:23:58.280 "poll_groups": [ 00:23:58.280 { 00:23:58.280 "name": "nvmf_tgt_poll_group_000", 00:23:58.280 "admin_qpairs": 1, 00:23:58.280 "io_qpairs": 3, 00:23:58.280 "current_admin_qpairs": 1, 00:23:58.280 "current_io_qpairs": 3, 00:23:58.280 "pending_bdev_io": 0, 00:23:58.280 "completed_nvme_io": 29987, 00:23:58.280 "transports": [ 00:23:58.280 { 00:23:58.280 "trtype": "TCP" 00:23:58.280 } 00:23:58.280 ] 00:23:58.280 }, 00:23:58.280 { 00:23:58.280 "name": "nvmf_tgt_poll_group_001", 00:23:58.280 "admin_qpairs": 0, 00:23:58.280 "io_qpairs": 1, 00:23:58.280 "current_admin_qpairs": 0, 00:23:58.280 "current_io_qpairs": 1, 00:23:58.280 "pending_bdev_io": 0, 00:23:58.280 "completed_nvme_io": 35533, 00:23:58.280 "transports": [ 00:23:58.280 { 00:23:58.280 "trtype": "TCP" 00:23:58.280 } 00:23:58.280 ] 00:23:58.280 }, 00:23:58.280 { 00:23:58.280 "name": 
"nvmf_tgt_poll_group_002", 00:23:58.280 "admin_qpairs": 0, 00:23:58.280 "io_qpairs": 0, 00:23:58.280 "current_admin_qpairs": 0, 00:23:58.280 "current_io_qpairs": 0, 00:23:58.280 "pending_bdev_io": 0, 00:23:58.280 "completed_nvme_io": 0, 00:23:58.280 "transports": [ 00:23:58.280 { 00:23:58.280 "trtype": "TCP" 00:23:58.280 } 00:23:58.280 ] 00:23:58.280 }, 00:23:58.280 { 00:23:58.280 "name": "nvmf_tgt_poll_group_003", 00:23:58.280 "admin_qpairs": 0, 00:23:58.280 "io_qpairs": 0, 00:23:58.280 "current_admin_qpairs": 0, 00:23:58.280 "current_io_qpairs": 0, 00:23:58.280 "pending_bdev_io": 0, 00:23:58.280 "completed_nvme_io": 0, 00:23:58.280 "transports": [ 00:23:58.280 { 00:23:58.280 "trtype": "TCP" 00:23:58.280 } 00:23:58.280 ] 00:23:58.280 } 00:23:58.280 ] 00:23:58.280 }' 00:23:58.280 19:39:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:23:58.280 19:39:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:23:58.280 19:39:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:23:58.280 19:39:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:23:58.280 19:39:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 3670502 00:24:06.420 Initializing NVMe Controllers 00:24:06.420 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:06.420 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:06.420 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:06.420 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:06.420 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:06.420 Initialization complete. Launching workers. 
00:24:06.420 ======================================================== 00:24:06.421 Latency(us) 00:24:06.421 Device Information : IOPS MiB/s Average min max 00:24:06.421 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5393.80 21.07 11870.74 1810.85 58541.73 00:24:06.421 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 19060.69 74.46 3357.42 1350.51 8728.32 00:24:06.421 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6069.60 23.71 10579.43 1817.99 58083.85 00:24:06.421 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4476.40 17.49 14346.23 2273.65 58485.79 00:24:06.421 ======================================================== 00:24:06.421 Total : 35000.49 136.72 7327.20 1350.51 58541.73 00:24:06.421 00:24:06.421 19:39:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:24:06.421 19:39:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:06.421 19:39:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:24:06.421 19:39:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:06.421 19:39:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:24:06.421 19:39:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:06.421 19:39:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:06.421 rmmod nvme_tcp 00:24:06.421 rmmod nvme_fabrics 00:24:06.421 rmmod nvme_keyring 00:24:06.421 19:39:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:06.421 19:39:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:24:06.421 19:39:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:24:06.421 19:39:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3670157 ']' 00:24:06.421 19:39:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3670157 00:24:06.421 19:39:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 3670157 ']' 00:24:06.421 19:39:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 3670157 00:24:06.421 19:39:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:24:06.421 19:39:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:06.421 19:39:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3670157 00:24:06.681 19:39:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:06.681 19:39:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:06.681 19:39:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3670157' 00:24:06.681 killing process with pid 3670157 00:24:06.681 19:39:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 3670157 00:24:06.681 [2024-05-15 19:39:32.630666] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:06.681 19:39:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 3670157 00:24:06.681 19:39:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:06.681 19:39:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:06.681 19:39:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:06.681 19:39:32 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:06.681 19:39:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:06.681 19:39:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.681 19:39:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:06.681 19:39:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.046 19:39:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:10.046 19:39:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:24:10.046 00:24:10.046 real 0m55.359s 00:24:10.046 user 2m49.825s 00:24:10.046 sys 0m12.038s 00:24:10.046 19:39:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:10.046 19:39:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:10.046 ************************************ 00:24:10.046 END TEST nvmf_perf_adq 00:24:10.046 ************************************ 00:24:10.046 19:39:35 nvmf_tcp -- nvmf/nvmf.sh@82 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:10.046 19:39:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:10.046 19:39:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:10.046 19:39:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:10.046 ************************************ 00:24:10.046 START TEST nvmf_shutdown 00:24:10.046 ************************************ 00:24:10.046 19:39:35 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:10.046 * Looking for test storage... 
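For reference, the pass/fail gate in the nvmf_perf_adq runs above is just a count of poll groups in the nvmf_get_stats output: the first pass expects all four groups to carry an I/O qpair (select(.current_io_qpairs == 1), count compared to 4), while the busy-poll pass counts idle groups instead (select(.current_io_qpairs == 0)) and checks that count against 2. A minimal standalone sketch of the first check; the scripts/rpc.py invocation is an assumption (the trace uses the test suite's rpc_cmd wrapper), the jq filter and the expected count are taken verbatim from the trace:

# count poll groups that currently own an I/O qpair
busy=$(scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
        | wc -l)
# the target ran with -m 0xF and spdk_nvme_perf with -c 0xF0, so correct ADQ
# placement shows all four poll groups with an active I/O qpair
[[ $busy -eq 4 ]] || echo "ADQ placement check failed: $busy of 4 poll groups busy"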
00:24:10.046 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:10.046 19:39:36 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:10.046 19:39:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:24:10.046 19:39:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:10.046 19:39:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:10.046 19:39:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:10.046 19:39:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:10.046 19:39:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:10.046 19:39:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:10.046 19:39:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:10.046 19:39:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:10.046 19:39:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:10.046 19:39:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:10.046 19:39:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:10.046 19:39:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:10.046 19:39:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:10.046 19:39:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:10.046 19:39:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:10.046 19:39:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:10.046 19:39:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:10.046 19:39:36 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:10.046 19:39:36 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:10.046 19:39:36 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:10.046 19:39:36 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.046 19:39:36 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.046 19:39:36 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.046 19:39:36 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:24:10.046 19:39:36 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.046 19:39:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:24:10.046 19:39:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:10.046 19:39:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:10.046 19:39:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:10.046 19:39:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:10.046 19:39:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:10.046 19:39:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:10.046 19:39:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:10.047 19:39:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:10.047 19:39:36 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:10.047 19:39:36 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:10.047 19:39:36 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:24:10.047 19:39:36 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:24:10.047 19:39:36 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:10.047 19:39:36 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:10.047 ************************************ 00:24:10.047 START TEST nvmf_shutdown_tc1 00:24:10.047 ************************************ 00:24:10.047 19:39:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc1 00:24:10.047 19:39:36 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:24:10.047 19:39:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:10.047 19:39:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:10.047 19:39:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:10.047 19:39:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:10.047 19:39:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:10.047 19:39:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:10.047 19:39:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:10.047 19:39:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:10.047 19:39:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.047 19:39:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:10.047 19:39:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:10.047 19:39:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:10.047 19:39:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:18.190 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:18.190 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:18.190 19:39:44 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:18.190 Found net devices under 0000:31:00.0: cvl_0_0 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:18.190 Found net devices under 0000:31:00.1: cvl_0_1 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:18.190 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:18.452 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:18.452 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:18.452 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:18.452 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:18.452 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:18.452 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:18.452 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:18.452 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:18.452 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.516 ms 00:24:18.452 00:24:18.452 --- 10.0.0.2 ping statistics --- 00:24:18.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:18.452 rtt min/avg/max/mdev = 0.516/0.516/0.516/0.000 ms 00:24:18.452 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:18.452 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:18.452 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:24:18.452 00:24:18.452 --- 10.0.0.1 ping statistics --- 00:24:18.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:18.452 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:24:18.452 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:18.452 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:24:18.452 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:18.452 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:18.452 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:18.452 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:18.452 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:18.452 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:18.452 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:18.713 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:18.713 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:18.713 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:18.713 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:18.713 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=3677571 00:24:18.713 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 3677571 00:24:18.713 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:18.713 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 3677571 ']' 00:24:18.713 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:18.713 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:18.713 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:18.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:18.713 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:18.713 19:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:18.713 [2024-05-15 19:39:44.720370] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
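The nvmf_tgt startup banner just above (and the DPDK EAL parameter line that follows) is the target being launched inside the network namespace that nvmf_tcp_init finished wiring up a few lines earlier: cvl_0_0 was moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2 (target side), cvl_0_1 stayed in the root namespace as 10.0.0.1 (initiator side), TCP port 4420 was opened in iptables, and both directions were verified with single pings. A condensed sketch of those plumbing steps, with every command, interface name and address taken from the trace (run as root, with the two cvl interfaces provided by the ice driver):

    # Condensed sketch of the nvmf_tcp_init plumbing shown in the trace; names
    # (cvl_0_0 / cvl_0_1 / cvl_0_0_ns_spdk) and addresses are copied from the log.
    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1          # start from clean addresses
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                   # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator side (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # NVMe/TCP listener port
    ping -c 1 10.0.0.2                                          # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1            # target -> initiator

The target application is then started inside that same namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x1E), which is why the nvmf_tgt command line in the log carries the ip netns exec prefix.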
00:24:18.713 [2024-05-15 19:39:44.720434] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:18.713 EAL: No free 2048 kB hugepages reported on node 1 00:24:18.713 [2024-05-15 19:39:44.801880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:18.713 [2024-05-15 19:39:44.876367] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:18.713 [2024-05-15 19:39:44.876410] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:18.713 [2024-05-15 19:39:44.876417] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:18.713 [2024-05-15 19:39:44.876424] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:18.713 [2024-05-15 19:39:44.876430] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:18.713 [2024-05-15 19:39:44.876546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:18.713 [2024-05-15 19:39:44.876702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:18.713 [2024-05-15 19:39:44.876824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:18.713 [2024-05-15 19:39:44.876825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:19.662 19:39:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:19.662 19:39:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:24:19.663 19:39:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:19.663 19:39:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:19.663 19:39:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:19.663 19:39:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:19.663 19:39:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:19.663 19:39:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.663 19:39:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:19.663 [2024-05-15 19:39:45.643237] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:19.663 19:39:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.663 19:39:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:19.663 19:39:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:19.663 19:39:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:19.663 19:39:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:19.663 19:39:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:19.663 19:39:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:19.663 19:39:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:19.663 19:39:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:19.663 19:39:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:19.663 19:39:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:19.663 19:39:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:19.663 19:39:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:19.663 19:39:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:19.663 19:39:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:19.663 19:39:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:19.663 19:39:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:19.663 19:39:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:19.663 19:39:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:19.663 19:39:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:19.663 19:39:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:19.663 19:39:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:19.663 19:39:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:19.663 19:39:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:19.663 19:39:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:19.663 19:39:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:19.663 19:39:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:19.663 19:39:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.663 19:39:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:19.663 Malloc1 00:24:19.663 [2024-05-15 19:39:45.746495] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:19.663 [2024-05-15 19:39:45.746738] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:19.663 Malloc2 00:24:19.663 Malloc3 00:24:19.663 Malloc4 00:24:19.926 Malloc5 00:24:19.926 Malloc6 00:24:19.926 Malloc7 00:24:19.926 Malloc8 00:24:19.926 Malloc9 00:24:19.926 Malloc10 00:24:19.926 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.926 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:19.926 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:19.926 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:20.188 19:39:46 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=3677828 00:24:20.188 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 3677828 /var/tmp/bdevperf.sock 00:24:20.188 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 3677828 ']' 00:24:20.188 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:20.188 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:20.188 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:20.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:20.188 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:24:20.188 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:20.188 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:20.188 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:20.188 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:24:20.188 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:24:20.188 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:20.188 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:20.188 { 00:24:20.188 "params": { 00:24:20.188 "name": "Nvme$subsystem", 00:24:20.188 "trtype": "$TEST_TRANSPORT", 00:24:20.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:20.188 "adrfam": "ipv4", 00:24:20.188 "trsvcid": "$NVMF_PORT", 00:24:20.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:20.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:20.188 "hdgst": ${hdgst:-false}, 00:24:20.188 "ddgst": ${ddgst:-false} 00:24:20.188 }, 00:24:20.188 "method": "bdev_nvme_attach_controller" 00:24:20.188 } 00:24:20.188 EOF 00:24:20.188 )") 00:24:20.188 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:20.188 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:20.188 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:20.188 { 00:24:20.188 "params": { 00:24:20.188 "name": "Nvme$subsystem", 00:24:20.188 "trtype": "$TEST_TRANSPORT", 00:24:20.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:20.188 "adrfam": "ipv4", 00:24:20.188 "trsvcid": "$NVMF_PORT", 00:24:20.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:20.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:20.188 "hdgst": ${hdgst:-false}, 00:24:20.188 "ddgst": ${ddgst:-false} 00:24:20.188 }, 00:24:20.188 "method": "bdev_nvme_attach_controller" 00:24:20.188 } 00:24:20.188 EOF 00:24:20.188 )") 00:24:20.188 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:20.188 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 
-- # for subsystem in "${@:-1}" 00:24:20.188 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:20.188 { 00:24:20.188 "params": { 00:24:20.188 "name": "Nvme$subsystem", 00:24:20.188 "trtype": "$TEST_TRANSPORT", 00:24:20.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:20.188 "adrfam": "ipv4", 00:24:20.188 "trsvcid": "$NVMF_PORT", 00:24:20.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:20.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:20.188 "hdgst": ${hdgst:-false}, 00:24:20.188 "ddgst": ${ddgst:-false} 00:24:20.188 }, 00:24:20.188 "method": "bdev_nvme_attach_controller" 00:24:20.188 } 00:24:20.188 EOF 00:24:20.188 )") 00:24:20.188 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:20.188 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:20.188 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:20.188 { 00:24:20.188 "params": { 00:24:20.188 "name": "Nvme$subsystem", 00:24:20.188 "trtype": "$TEST_TRANSPORT", 00:24:20.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:20.188 "adrfam": "ipv4", 00:24:20.188 "trsvcid": "$NVMF_PORT", 00:24:20.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:20.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:20.188 "hdgst": ${hdgst:-false}, 00:24:20.188 "ddgst": ${ddgst:-false} 00:24:20.189 }, 00:24:20.189 "method": "bdev_nvme_attach_controller" 00:24:20.189 } 00:24:20.189 EOF 00:24:20.189 )") 00:24:20.189 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:20.189 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:20.189 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:20.189 { 00:24:20.189 "params": { 00:24:20.189 "name": "Nvme$subsystem", 00:24:20.189 "trtype": "$TEST_TRANSPORT", 00:24:20.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:20.189 "adrfam": "ipv4", 00:24:20.189 "trsvcid": "$NVMF_PORT", 00:24:20.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:20.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:20.189 "hdgst": ${hdgst:-false}, 00:24:20.189 "ddgst": ${ddgst:-false} 00:24:20.189 }, 00:24:20.189 "method": "bdev_nvme_attach_controller" 00:24:20.189 } 00:24:20.189 EOF 00:24:20.189 )") 00:24:20.189 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:20.189 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:20.189 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:20.189 { 00:24:20.189 "params": { 00:24:20.189 "name": "Nvme$subsystem", 00:24:20.189 "trtype": "$TEST_TRANSPORT", 00:24:20.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:20.189 "adrfam": "ipv4", 00:24:20.189 "trsvcid": "$NVMF_PORT", 00:24:20.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:20.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:20.189 "hdgst": ${hdgst:-false}, 00:24:20.189 "ddgst": ${ddgst:-false} 00:24:20.189 }, 00:24:20.189 "method": "bdev_nvme_attach_controller" 00:24:20.189 } 00:24:20.189 EOF 00:24:20.189 )") 00:24:20.189 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:20.189 [2024-05-15 19:39:46.194924] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 
initialization... 00:24:20.189 [2024-05-15 19:39:46.194978] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:24:20.189 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:20.189 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:20.189 { 00:24:20.189 "params": { 00:24:20.189 "name": "Nvme$subsystem", 00:24:20.189 "trtype": "$TEST_TRANSPORT", 00:24:20.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:20.189 "adrfam": "ipv4", 00:24:20.189 "trsvcid": "$NVMF_PORT", 00:24:20.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:20.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:20.189 "hdgst": ${hdgst:-false}, 00:24:20.189 "ddgst": ${ddgst:-false} 00:24:20.189 }, 00:24:20.189 "method": "bdev_nvme_attach_controller" 00:24:20.189 } 00:24:20.189 EOF 00:24:20.189 )") 00:24:20.189 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:20.189 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:20.189 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:20.189 { 00:24:20.189 "params": { 00:24:20.189 "name": "Nvme$subsystem", 00:24:20.189 "trtype": "$TEST_TRANSPORT", 00:24:20.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:20.189 "adrfam": "ipv4", 00:24:20.189 "trsvcid": "$NVMF_PORT", 00:24:20.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:20.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:20.189 "hdgst": ${hdgst:-false}, 00:24:20.189 "ddgst": ${ddgst:-false} 00:24:20.189 }, 00:24:20.189 "method": "bdev_nvme_attach_controller" 00:24:20.189 } 00:24:20.189 EOF 00:24:20.189 )") 00:24:20.189 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:20.189 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:20.189 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:20.189 { 00:24:20.189 "params": { 00:24:20.189 "name": "Nvme$subsystem", 00:24:20.189 "trtype": "$TEST_TRANSPORT", 00:24:20.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:20.189 "adrfam": "ipv4", 00:24:20.189 "trsvcid": "$NVMF_PORT", 00:24:20.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:20.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:20.189 "hdgst": ${hdgst:-false}, 00:24:20.189 "ddgst": ${ddgst:-false} 00:24:20.189 }, 00:24:20.189 "method": "bdev_nvme_attach_controller" 00:24:20.189 } 00:24:20.189 EOF 00:24:20.189 )") 00:24:20.189 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:20.189 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:20.189 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:20.189 { 00:24:20.189 "params": { 00:24:20.189 "name": "Nvme$subsystem", 00:24:20.189 "trtype": "$TEST_TRANSPORT", 00:24:20.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:20.189 "adrfam": "ipv4", 00:24:20.189 "trsvcid": "$NVMF_PORT", 00:24:20.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:20.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:20.189 "hdgst": ${hdgst:-false}, 
00:24:20.189 "ddgst": ${ddgst:-false} 00:24:20.189 }, 00:24:20.189 "method": "bdev_nvme_attach_controller" 00:24:20.189 } 00:24:20.189 EOF 00:24:20.189 )") 00:24:20.189 EAL: No free 2048 kB hugepages reported on node 1 00:24:20.189 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:20.189 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:24:20.189 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:24:20.189 19:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:20.189 "params": { 00:24:20.189 "name": "Nvme1", 00:24:20.189 "trtype": "tcp", 00:24:20.189 "traddr": "10.0.0.2", 00:24:20.189 "adrfam": "ipv4", 00:24:20.189 "trsvcid": "4420", 00:24:20.189 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:20.189 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:20.189 "hdgst": false, 00:24:20.189 "ddgst": false 00:24:20.189 }, 00:24:20.189 "method": "bdev_nvme_attach_controller" 00:24:20.189 },{ 00:24:20.189 "params": { 00:24:20.189 "name": "Nvme2", 00:24:20.189 "trtype": "tcp", 00:24:20.189 "traddr": "10.0.0.2", 00:24:20.189 "adrfam": "ipv4", 00:24:20.189 "trsvcid": "4420", 00:24:20.189 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:20.189 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:20.189 "hdgst": false, 00:24:20.189 "ddgst": false 00:24:20.189 }, 00:24:20.189 "method": "bdev_nvme_attach_controller" 00:24:20.189 },{ 00:24:20.189 "params": { 00:24:20.189 "name": "Nvme3", 00:24:20.189 "trtype": "tcp", 00:24:20.189 "traddr": "10.0.0.2", 00:24:20.189 "adrfam": "ipv4", 00:24:20.189 "trsvcid": "4420", 00:24:20.189 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:20.189 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:20.189 "hdgst": false, 00:24:20.189 "ddgst": false 00:24:20.189 }, 00:24:20.189 "method": "bdev_nvme_attach_controller" 00:24:20.189 },{ 00:24:20.189 "params": { 00:24:20.189 "name": "Nvme4", 00:24:20.189 "trtype": "tcp", 00:24:20.189 "traddr": "10.0.0.2", 00:24:20.189 "adrfam": "ipv4", 00:24:20.189 "trsvcid": "4420", 00:24:20.189 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:20.189 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:20.189 "hdgst": false, 00:24:20.189 "ddgst": false 00:24:20.189 }, 00:24:20.189 "method": "bdev_nvme_attach_controller" 00:24:20.189 },{ 00:24:20.189 "params": { 00:24:20.189 "name": "Nvme5", 00:24:20.189 "trtype": "tcp", 00:24:20.189 "traddr": "10.0.0.2", 00:24:20.189 "adrfam": "ipv4", 00:24:20.189 "trsvcid": "4420", 00:24:20.189 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:20.189 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:20.189 "hdgst": false, 00:24:20.189 "ddgst": false 00:24:20.189 }, 00:24:20.189 "method": "bdev_nvme_attach_controller" 00:24:20.189 },{ 00:24:20.189 "params": { 00:24:20.189 "name": "Nvme6", 00:24:20.189 "trtype": "tcp", 00:24:20.189 "traddr": "10.0.0.2", 00:24:20.189 "adrfam": "ipv4", 00:24:20.189 "trsvcid": "4420", 00:24:20.189 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:20.189 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:20.189 "hdgst": false, 00:24:20.189 "ddgst": false 00:24:20.189 }, 00:24:20.189 "method": "bdev_nvme_attach_controller" 00:24:20.189 },{ 00:24:20.189 "params": { 00:24:20.189 "name": "Nvme7", 00:24:20.189 "trtype": "tcp", 00:24:20.189 "traddr": "10.0.0.2", 00:24:20.189 "adrfam": "ipv4", 00:24:20.189 "trsvcid": "4420", 00:24:20.189 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:20.189 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:20.189 "hdgst": false, 00:24:20.189 "ddgst": false 
00:24:20.189 }, 00:24:20.189 "method": "bdev_nvme_attach_controller" 00:24:20.189 },{ 00:24:20.189 "params": { 00:24:20.189 "name": "Nvme8", 00:24:20.189 "trtype": "tcp", 00:24:20.189 "traddr": "10.0.0.2", 00:24:20.189 "adrfam": "ipv4", 00:24:20.189 "trsvcid": "4420", 00:24:20.189 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:20.189 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:20.189 "hdgst": false, 00:24:20.189 "ddgst": false 00:24:20.189 }, 00:24:20.189 "method": "bdev_nvme_attach_controller" 00:24:20.189 },{ 00:24:20.189 "params": { 00:24:20.189 "name": "Nvme9", 00:24:20.189 "trtype": "tcp", 00:24:20.189 "traddr": "10.0.0.2", 00:24:20.189 "adrfam": "ipv4", 00:24:20.189 "trsvcid": "4420", 00:24:20.189 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:20.189 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:20.189 "hdgst": false, 00:24:20.189 "ddgst": false 00:24:20.189 }, 00:24:20.189 "method": "bdev_nvme_attach_controller" 00:24:20.189 },{ 00:24:20.189 "params": { 00:24:20.189 "name": "Nvme10", 00:24:20.189 "trtype": "tcp", 00:24:20.189 "traddr": "10.0.0.2", 00:24:20.189 "adrfam": "ipv4", 00:24:20.189 "trsvcid": "4420", 00:24:20.189 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:20.189 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:20.189 "hdgst": false, 00:24:20.189 "ddgst": false 00:24:20.189 }, 00:24:20.189 "method": "bdev_nvme_attach_controller" 00:24:20.189 }' 00:24:20.189 [2024-05-15 19:39:46.280515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.189 [2024-05-15 19:39:46.346492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:22.104 19:39:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:22.104 19:39:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:24:22.104 19:39:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:22.104 19:39:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.104 19:39:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:22.104 19:39:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.104 19:39:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 3677828 00:24:22.104 19:39:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:24:22.104 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3677828 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:24:22.104 19:39:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:24:22.676 19:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 3677571 00:24:22.676 19:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:24:22.676 19:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:22.676 19:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:24:22.676 19:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 
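Both here and in the bdev_svc launch above, gen_nvmf_target_json expands into the same long run of config+=("$(cat <<-EOF ...)") entries: one bdev_nvme_attach_controller stanza per subsystem (1 through 10), later comma-joined and passed through jq into the document handed over on /dev/fd/63 (and, below, /dev/fd/62). A minimal sketch of that accumulation pattern, reproducing the stanza exactly as it appears in the trace; the enclosing JSON document that jq wraps around the stanzas is not visible in the log, so only the per-subsystem piece is shown. TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_PORT come from the test environment and expand to tcp, 10.0.0.2 and 4420 in the printed output above, with hdgst/ddgst defaulting to false.

    # Sketch of the per-subsystem stanza accumulation traced above; the outer
    # wrapper that jq builds around these stanzas is not shown in the log.
    config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<-EOF
        {
          "params": {
            "name": "Nvme$subsystem",
            "trtype": "$TEST_TRANSPORT",
            "traddr": "$NVMF_FIRST_TARGET_IP",
            "adrfam": "ipv4",
            "trsvcid": "$NVMF_PORT",
            "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
            "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
            "hdgst": ${hdgst:-false},
            "ddgst": ${ddgst:-false}
          },
          "method": "bdev_nvme_attach_controller"
        }
EOF
        )")
    done
    ( IFS=,; printf '%s\n' "${config[*]}" )   # comma-joined stanzas, as printed in the trace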
00:24:22.676 19:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:22.676 19:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:22.676 { 00:24:22.676 "params": { 00:24:22.676 "name": "Nvme$subsystem", 00:24:22.676 "trtype": "$TEST_TRANSPORT", 00:24:22.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:22.676 "adrfam": "ipv4", 00:24:22.676 "trsvcid": "$NVMF_PORT", 00:24:22.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:22.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:22.676 "hdgst": ${hdgst:-false}, 00:24:22.676 "ddgst": ${ddgst:-false} 00:24:22.676 }, 00:24:22.676 "method": "bdev_nvme_attach_controller" 00:24:22.676 } 00:24:22.676 EOF 00:24:22.676 )") 00:24:22.676 19:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:22.676 19:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:22.676 19:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:22.676 { 00:24:22.676 "params": { 00:24:22.676 "name": "Nvme$subsystem", 00:24:22.676 "trtype": "$TEST_TRANSPORT", 00:24:22.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:22.676 "adrfam": "ipv4", 00:24:22.676 "trsvcid": "$NVMF_PORT", 00:24:22.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:22.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:22.676 "hdgst": ${hdgst:-false}, 00:24:22.676 "ddgst": ${ddgst:-false} 00:24:22.676 }, 00:24:22.676 "method": "bdev_nvme_attach_controller" 00:24:22.676 } 00:24:22.676 EOF 00:24:22.676 )") 00:24:22.676 19:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:22.676 19:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:22.676 19:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:22.676 { 00:24:22.676 "params": { 00:24:22.676 "name": "Nvme$subsystem", 00:24:22.676 "trtype": "$TEST_TRANSPORT", 00:24:22.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:22.676 "adrfam": "ipv4", 00:24:22.676 "trsvcid": "$NVMF_PORT", 00:24:22.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:22.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:22.676 "hdgst": ${hdgst:-false}, 00:24:22.676 "ddgst": ${ddgst:-false} 00:24:22.676 }, 00:24:22.676 "method": "bdev_nvme_attach_controller" 00:24:22.676 } 00:24:22.676 EOF 00:24:22.676 )") 00:24:22.676 19:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:22.676 19:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:22.676 19:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:22.676 { 00:24:22.676 "params": { 00:24:22.676 "name": "Nvme$subsystem", 00:24:22.676 "trtype": "$TEST_TRANSPORT", 00:24:22.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:22.677 "adrfam": "ipv4", 00:24:22.677 "trsvcid": "$NVMF_PORT", 00:24:22.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:22.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:22.677 "hdgst": ${hdgst:-false}, 00:24:22.677 "ddgst": ${ddgst:-false} 00:24:22.677 }, 00:24:22.677 "method": "bdev_nvme_attach_controller" 00:24:22.677 } 00:24:22.677 EOF 00:24:22.677 )") 00:24:22.677 19:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:22.677 19:39:48 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:22.677 19:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:22.677 { 00:24:22.677 "params": { 00:24:22.677 "name": "Nvme$subsystem", 00:24:22.677 "trtype": "$TEST_TRANSPORT", 00:24:22.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:22.677 "adrfam": "ipv4", 00:24:22.677 "trsvcid": "$NVMF_PORT", 00:24:22.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:22.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:22.677 "hdgst": ${hdgst:-false}, 00:24:22.677 "ddgst": ${ddgst:-false} 00:24:22.677 }, 00:24:22.677 "method": "bdev_nvme_attach_controller" 00:24:22.677 } 00:24:22.677 EOF 00:24:22.677 )") 00:24:22.677 19:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:22.677 19:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:22.677 19:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:22.677 { 00:24:22.677 "params": { 00:24:22.677 "name": "Nvme$subsystem", 00:24:22.677 "trtype": "$TEST_TRANSPORT", 00:24:22.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:22.677 "adrfam": "ipv4", 00:24:22.677 "trsvcid": "$NVMF_PORT", 00:24:22.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:22.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:22.677 "hdgst": ${hdgst:-false}, 00:24:22.677 "ddgst": ${ddgst:-false} 00:24:22.677 }, 00:24:22.677 "method": "bdev_nvme_attach_controller" 00:24:22.677 } 00:24:22.677 EOF 00:24:22.677 )") 00:24:22.677 19:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:22.677 [2024-05-15 19:39:48.835645] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
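The "Starting SPDK" banner above is bdevperf itself coming up. The bdev_svc helper from the previous stage has already been killed (the kill -9 and the shell's "Killed" message quoting its --json <(gen_nvmf_target_json ...) invocation); it only existed to exercise the generated config via framework_wait_init. bdevperf is now launched against the same ten NVMe-oF subsystems, visible in the log as --json /dev/fd/62 together with -q 64 -o 65536 -w verify -t 1. A sketch of the equivalent standalone invocation follows; it assumes /dev/fd/62 is the same process-substitution pattern echoed for bdev_svc (bdevperf itself only ever sees a /dev/fd/NN path), and gen_nvmf_target_json is the helper from the repo's nvmf/common.sh referenced throughout the trace. The flag meanings are inferred from the per-job lines of the results table further down.

    # Equivalent standalone perf stage, reconstructed from the command line and
    # flags visible in the trace; the <(...) form is an assumption based on the
    # bdev_svc invocation echoed above.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    num_subsystems=({1..10})
    "$rootdir/build/examples/bdevperf" \
        --json <(gen_nvmf_target_json "${num_subsystems[@]}") \
        -q 64 -o 65536 -w verify -t 1
    # -q 64     : 64 outstanding I/Os per bdev ("depth: 64" in the job lines)
    # -o 65536  : 64 KiB I/O size ("IO size: 65536")
    # -w verify : write, read back and compare
    # -t 1      : 1 second of I/O ("Running I/O for 1 seconds..." in the results)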
00:24:22.677 [2024-05-15 19:39:48.835697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3678397 ] 00:24:22.677 19:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:22.677 19:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:22.677 { 00:24:22.677 "params": { 00:24:22.677 "name": "Nvme$subsystem", 00:24:22.677 "trtype": "$TEST_TRANSPORT", 00:24:22.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:22.677 "adrfam": "ipv4", 00:24:22.677 "trsvcid": "$NVMF_PORT", 00:24:22.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:22.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:22.677 "hdgst": ${hdgst:-false}, 00:24:22.677 "ddgst": ${ddgst:-false} 00:24:22.677 }, 00:24:22.677 "method": "bdev_nvme_attach_controller" 00:24:22.677 } 00:24:22.677 EOF 00:24:22.677 )") 00:24:22.677 19:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:22.677 19:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:22.677 19:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:22.677 { 00:24:22.677 "params": { 00:24:22.677 "name": "Nvme$subsystem", 00:24:22.677 "trtype": "$TEST_TRANSPORT", 00:24:22.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:22.677 "adrfam": "ipv4", 00:24:22.677 "trsvcid": "$NVMF_PORT", 00:24:22.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:22.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:22.677 "hdgst": ${hdgst:-false}, 00:24:22.677 "ddgst": ${ddgst:-false} 00:24:22.677 }, 00:24:22.677 "method": "bdev_nvme_attach_controller" 00:24:22.677 } 00:24:22.677 EOF 00:24:22.677 )") 00:24:22.677 19:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:22.677 19:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:22.677 19:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:22.677 { 00:24:22.677 "params": { 00:24:22.677 "name": "Nvme$subsystem", 00:24:22.677 "trtype": "$TEST_TRANSPORT", 00:24:22.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:22.677 "adrfam": "ipv4", 00:24:22.677 "trsvcid": "$NVMF_PORT", 00:24:22.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:22.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:22.677 "hdgst": ${hdgst:-false}, 00:24:22.677 "ddgst": ${ddgst:-false} 00:24:22.677 }, 00:24:22.677 "method": "bdev_nvme_attach_controller" 00:24:22.677 } 00:24:22.677 EOF 00:24:22.677 )") 00:24:22.677 19:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:22.677 19:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:22.677 19:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:22.677 { 00:24:22.677 "params": { 00:24:22.677 "name": "Nvme$subsystem", 00:24:22.677 "trtype": "$TEST_TRANSPORT", 00:24:22.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:22.677 "adrfam": "ipv4", 00:24:22.677 "trsvcid": "$NVMF_PORT", 00:24:22.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:22.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:22.677 "hdgst": ${hdgst:-false}, 
00:24:22.677 "ddgst": ${ddgst:-false} 00:24:22.677 }, 00:24:22.677 "method": "bdev_nvme_attach_controller" 00:24:22.677 } 00:24:22.677 EOF 00:24:22.677 )") 00:24:22.938 19:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:22.938 EAL: No free 2048 kB hugepages reported on node 1 00:24:22.938 19:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:24:22.938 19:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:24:22.938 19:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:22.938 "params": { 00:24:22.938 "name": "Nvme1", 00:24:22.938 "trtype": "tcp", 00:24:22.938 "traddr": "10.0.0.2", 00:24:22.938 "adrfam": "ipv4", 00:24:22.938 "trsvcid": "4420", 00:24:22.938 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:22.938 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:22.938 "hdgst": false, 00:24:22.938 "ddgst": false 00:24:22.938 }, 00:24:22.938 "method": "bdev_nvme_attach_controller" 00:24:22.938 },{ 00:24:22.938 "params": { 00:24:22.938 "name": "Nvme2", 00:24:22.938 "trtype": "tcp", 00:24:22.938 "traddr": "10.0.0.2", 00:24:22.938 "adrfam": "ipv4", 00:24:22.938 "trsvcid": "4420", 00:24:22.938 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:22.938 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:22.938 "hdgst": false, 00:24:22.938 "ddgst": false 00:24:22.938 }, 00:24:22.938 "method": "bdev_nvme_attach_controller" 00:24:22.938 },{ 00:24:22.938 "params": { 00:24:22.938 "name": "Nvme3", 00:24:22.938 "trtype": "tcp", 00:24:22.938 "traddr": "10.0.0.2", 00:24:22.938 "adrfam": "ipv4", 00:24:22.938 "trsvcid": "4420", 00:24:22.938 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:22.938 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:22.938 "hdgst": false, 00:24:22.938 "ddgst": false 00:24:22.938 }, 00:24:22.938 "method": "bdev_nvme_attach_controller" 00:24:22.938 },{ 00:24:22.938 "params": { 00:24:22.938 "name": "Nvme4", 00:24:22.938 "trtype": "tcp", 00:24:22.938 "traddr": "10.0.0.2", 00:24:22.938 "adrfam": "ipv4", 00:24:22.938 "trsvcid": "4420", 00:24:22.938 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:22.938 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:22.938 "hdgst": false, 00:24:22.938 "ddgst": false 00:24:22.938 }, 00:24:22.938 "method": "bdev_nvme_attach_controller" 00:24:22.938 },{ 00:24:22.938 "params": { 00:24:22.938 "name": "Nvme5", 00:24:22.938 "trtype": "tcp", 00:24:22.938 "traddr": "10.0.0.2", 00:24:22.938 "adrfam": "ipv4", 00:24:22.938 "trsvcid": "4420", 00:24:22.938 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:22.938 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:22.938 "hdgst": false, 00:24:22.938 "ddgst": false 00:24:22.938 }, 00:24:22.938 "method": "bdev_nvme_attach_controller" 00:24:22.938 },{ 00:24:22.938 "params": { 00:24:22.938 "name": "Nvme6", 00:24:22.938 "trtype": "tcp", 00:24:22.938 "traddr": "10.0.0.2", 00:24:22.938 "adrfam": "ipv4", 00:24:22.938 "trsvcid": "4420", 00:24:22.938 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:22.938 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:22.938 "hdgst": false, 00:24:22.938 "ddgst": false 00:24:22.938 }, 00:24:22.938 "method": "bdev_nvme_attach_controller" 00:24:22.938 },{ 00:24:22.938 "params": { 00:24:22.938 "name": "Nvme7", 00:24:22.938 "trtype": "tcp", 00:24:22.938 "traddr": "10.0.0.2", 00:24:22.938 "adrfam": "ipv4", 00:24:22.938 "trsvcid": "4420", 00:24:22.938 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:22.938 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:22.938 "hdgst": false, 00:24:22.938 "ddgst": false 
00:24:22.938 }, 00:24:22.938 "method": "bdev_nvme_attach_controller" 00:24:22.938 },{ 00:24:22.938 "params": { 00:24:22.938 "name": "Nvme8", 00:24:22.938 "trtype": "tcp", 00:24:22.938 "traddr": "10.0.0.2", 00:24:22.938 "adrfam": "ipv4", 00:24:22.938 "trsvcid": "4420", 00:24:22.938 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:22.938 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:22.938 "hdgst": false, 00:24:22.938 "ddgst": false 00:24:22.939 }, 00:24:22.939 "method": "bdev_nvme_attach_controller" 00:24:22.939 },{ 00:24:22.939 "params": { 00:24:22.939 "name": "Nvme9", 00:24:22.939 "trtype": "tcp", 00:24:22.939 "traddr": "10.0.0.2", 00:24:22.939 "adrfam": "ipv4", 00:24:22.939 "trsvcid": "4420", 00:24:22.939 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:22.939 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:22.939 "hdgst": false, 00:24:22.939 "ddgst": false 00:24:22.939 }, 00:24:22.939 "method": "bdev_nvme_attach_controller" 00:24:22.939 },{ 00:24:22.939 "params": { 00:24:22.939 "name": "Nvme10", 00:24:22.939 "trtype": "tcp", 00:24:22.939 "traddr": "10.0.0.2", 00:24:22.939 "adrfam": "ipv4", 00:24:22.939 "trsvcid": "4420", 00:24:22.939 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:22.939 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:22.939 "hdgst": false, 00:24:22.939 "ddgst": false 00:24:22.939 }, 00:24:22.939 "method": "bdev_nvme_attach_controller" 00:24:22.939 }' 00:24:22.939 [2024-05-15 19:39:48.921782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.939 [2024-05-15 19:39:48.986274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:24.325 Running I/O for 1 seconds... 00:24:25.269 00:24:25.269 Latency(us) 00:24:25.269 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:25.269 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:25.269 Verification LBA range: start 0x0 length 0x400 00:24:25.269 Nvme1n1 : 1.13 226.56 14.16 0.00 0.00 279656.96 21626.88 265639.25 00:24:25.269 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:25.269 Verification LBA range: start 0x0 length 0x400 00:24:25.269 Nvme2n1 : 1.14 224.87 14.05 0.00 0.00 276862.51 21626.88 244667.73 00:24:25.269 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:25.269 Verification LBA range: start 0x0 length 0x400 00:24:25.269 Nvme3n1 : 1.17 272.42 17.03 0.00 0.00 223409.83 25668.27 225443.84 00:24:25.269 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:25.269 Verification LBA range: start 0x0 length 0x400 00:24:25.269 Nvme4n1 : 1.07 246.66 15.42 0.00 0.00 240662.15 6498.99 242920.11 00:24:25.269 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:25.269 Verification LBA range: start 0x0 length 0x400 00:24:25.269 Nvme5n1 : 1.10 232.42 14.53 0.00 0.00 253439.57 20425.39 249910.61 00:24:25.269 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:25.269 Verification LBA range: start 0x0 length 0x400 00:24:25.269 Nvme6n1 : 1.17 219.16 13.70 0.00 0.00 265350.61 23046.83 269134.51 00:24:25.269 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:25.269 Verification LBA range: start 0x0 length 0x400 00:24:25.269 Nvme7n1 : 1.18 270.10 16.88 0.00 0.00 211797.67 19660.80 244667.73 00:24:25.269 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:25.269 Verification LBA range: start 0x0 length 0x400 00:24:25.269 Nvme8n1 : 1.19 269.02 16.81 0.00 0.00 208923.99 
19551.57 246415.36 00:24:25.269 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:25.269 Verification LBA range: start 0x0 length 0x400 00:24:25.269 Nvme9n1 : 1.20 266.93 16.68 0.00 0.00 207114.07 16274.77 248162.99 00:24:25.269 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:25.269 Verification LBA range: start 0x0 length 0x400 00:24:25.269 Nvme10n1 : 1.18 221.76 13.86 0.00 0.00 243481.77 1208.32 274377.39 00:24:25.269 =================================================================================================================== 00:24:25.269 Total : 2449.90 153.12 0.00 0.00 238518.38 1208.32 274377.39 00:24:25.531 19:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:24:25.531 19:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:25.531 19:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:25.531 19:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:25.531 19:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:25.531 19:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:25.531 19:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:24:25.531 19:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:25.531 19:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:24:25.531 19:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:25.531 19:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:25.531 rmmod nvme_tcp 00:24:25.531 rmmod nvme_fabrics 00:24:25.531 rmmod nvme_keyring 00:24:25.531 19:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:25.531 19:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:24:25.531 19:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:24:25.531 19:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 3677571 ']' 00:24:25.531 19:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 3677571 00:24:25.531 19:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # '[' -z 3677571 ']' 00:24:25.531 19:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # kill -0 3677571 00:24:25.531 19:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # uname 00:24:25.531 19:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:25.531 19:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3677571 00:24:25.531 19:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:25.531 19:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:25.531 19:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 3677571' 00:24:25.531 killing process with pid 3677571 00:24:25.531 19:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # kill 3677571 00:24:25.531 [2024-05-15 19:39:51.645591] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:25.531 19:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # wait 3677571 00:24:25.792 19:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:25.792 19:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:25.792 19:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:25.792 19:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:25.792 19:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:25.792 19:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.792 19:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:25.792 19:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.343 19:39:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:28.343 00:24:28.343 real 0m17.853s 00:24:28.343 user 0m34.428s 00:24:28.343 sys 0m7.592s 00:24:28.343 19:39:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:28.343 19:39:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:28.343 ************************************ 00:24:28.343 END TEST nvmf_shutdown_tc1 00:24:28.343 ************************************ 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:28.343 ************************************ 00:24:28.343 START TEST nvmf_shutdown_tc2 00:24:28.343 ************************************ 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc2 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:28.343 
19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:28.343 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:28.343 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:28.343 Found net devices under 0000:31:00.0: cvl_0_0 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:28.343 Found net devices under 0000:31:00.1: cvl_0_1 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:28.343 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:24:28.344 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:28.344 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:28.344 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:28.344 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:28.344 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:28.344 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:28.344 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:28.344 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:28.344 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:28.344 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:28.344 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:28.344 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:28.344 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
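The NIC discovery above never touches vendor tools; it is a pure sysfs walk. Each PCI function on the e810 whitelist (both ports here report 0x8086:0x159b and are handled by the ice driver) is mapped to its kernel interface by listing the net/ directory of the PCI device. A minimal sketch of the same lookup for this machine's two ports:

  # map each whitelisted PCI function to its renamed net device
  ls /sys/bus/pci/devices/0000:31:00.0/net/   # -> cvl_0_0
  ls /sys/bus/pci/devices/0000:31:00.1/net/   # -> cvl_0_1
  # common.sh collects these into net_devs=(cvl_0_0 cvl_0_1); since more than one
  # port is available it uses the first as the target-side NIC (10.0.0.2) and the
  # second as the initiator-side NIC (10.0.0.1), as the trace shows next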
00:24:28.344 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:28.344 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:28.344 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:28.344 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:28.344 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:28.344 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:28.344 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:28.344 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:28.344 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:28.344 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:28.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:28.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:24:28.344 00:24:28.344 --- 10.0.0.2 ping statistics --- 00:24:28.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.344 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:24:28.344 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:28.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:28.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.366 ms 00:24:28.344 00:24:28.344 --- 10.0.0.1 ping statistics --- 00:24:28.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.344 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:24:28.344 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:28.344 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:24:28.344 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:28.344 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:28.344 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:28.344 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:28.344 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:28.344 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:28.344 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:28.344 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:28.344 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:28.344 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:28.344 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:28.344 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3679518 00:24:28.344 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3679518 00:24:28.344 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:28.344 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3679518 ']' 00:24:28.344 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.344 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:28.344 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:28.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:28.344 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:28.344 19:39:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:28.605 [2024-05-15 19:39:54.545892] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
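The namespace wiring that tc2's nvmftestinit just completed gives a single host a genuine TCP path between target and initiator: the target-side port is hidden in its own network namespace, so the SPDK target can listen on 10.0.0.2 while the initiator connects from 10.0.0.1 over the second port. Condensed from the commands in the trace (a sketch of this run, not of every branch in common.sh):

  # move the target-side port into a private namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # address both ends: the initiator port stays in the default namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port on the initiator side, then verify reachability both ways
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # the target itself then runs inside the namespace, as logged just above
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E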
00:24:28.605 [2024-05-15 19:39:54.545956] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:28.605 EAL: No free 2048 kB hugepages reported on node 1 00:24:28.605 [2024-05-15 19:39:54.623861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:28.605 [2024-05-15 19:39:54.697610] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:28.605 [2024-05-15 19:39:54.697646] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:28.605 [2024-05-15 19:39:54.697654] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:28.605 [2024-05-15 19:39:54.697660] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:28.605 [2024-05-15 19:39:54.697666] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:28.605 [2024-05-15 19:39:54.697775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:28.605 [2024-05-15 19:39:54.697960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:28.605 [2024-05-15 19:39:54.698117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:28.605 [2024-05-15 19:39:54.698118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:29.550 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:29.550 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:24:29.550 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:29.550 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:29.550 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:29.550 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:29.550 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:29.550 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.550 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:29.550 [2024-05-15 19:39:55.466166] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:29.550 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.550 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:29.550 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:29.550 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:29.550 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:29.550 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:29.550 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:29.550 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:29.550 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:29.550 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:29.550 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:29.550 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:29.550 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:29.550 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:29.550 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:29.550 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:29.550 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:29.550 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:29.550 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:29.550 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:29.550 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:29.550 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:29.550 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:29.551 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:29.551 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:29.551 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:29.551 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:29.551 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.551 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:29.551 Malloc1 00:24:29.551 [2024-05-15 19:39:55.569358] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:29.551 [2024-05-15 19:39:55.569575] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:29.551 Malloc2 00:24:29.551 Malloc3 00:24:29.551 Malloc4 00:24:29.551 Malloc5 00:24:29.812 Malloc6 00:24:29.812 Malloc7 00:24:29.812 Malloc8 00:24:29.812 Malloc9 00:24:29.812 Malloc10 00:24:29.812 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.812 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:29.812 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:29.812 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:29.812 19:39:55 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=3679903 00:24:29.812 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 3679903 /var/tmp/bdevperf.sock 00:24:29.812 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3679903 ']' 00:24:29.812 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:29.812 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:29.812 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:29.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:29.812 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:29.812 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:29.812 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:29.812 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:29.812 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:24:29.812 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:24:29.813 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:29.813 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:29.813 { 00:24:29.813 "params": { 00:24:29.813 "name": "Nvme$subsystem", 00:24:29.813 "trtype": "$TEST_TRANSPORT", 00:24:29.813 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:29.813 "adrfam": "ipv4", 00:24:29.813 "trsvcid": "$NVMF_PORT", 00:24:29.813 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:29.813 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:29.813 "hdgst": ${hdgst:-false}, 00:24:29.813 "ddgst": ${ddgst:-false} 00:24:29.813 }, 00:24:29.813 "method": "bdev_nvme_attach_controller" 00:24:29.813 } 00:24:29.813 EOF 00:24:29.813 )") 00:24:29.813 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:29.813 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:29.813 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:29.813 { 00:24:29.813 "params": { 00:24:29.813 "name": "Nvme$subsystem", 00:24:29.813 "trtype": "$TEST_TRANSPORT", 00:24:29.813 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:29.813 "adrfam": "ipv4", 00:24:29.813 "trsvcid": "$NVMF_PORT", 00:24:29.813 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:29.813 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:29.813 "hdgst": ${hdgst:-false}, 00:24:29.813 "ddgst": ${ddgst:-false} 00:24:29.813 }, 00:24:29.813 "method": "bdev_nvme_attach_controller" 00:24:29.813 } 00:24:29.813 EOF 00:24:29.813 )") 00:24:29.813 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:29.813 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:29.813 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:29.813 { 00:24:29.813 "params": { 00:24:29.813 "name": "Nvme$subsystem", 00:24:29.813 "trtype": "$TEST_TRANSPORT", 00:24:29.813 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:29.813 "adrfam": "ipv4", 00:24:29.813 "trsvcid": "$NVMF_PORT", 00:24:29.813 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:29.813 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:29.813 "hdgst": ${hdgst:-false}, 00:24:29.813 "ddgst": ${ddgst:-false} 00:24:29.813 }, 00:24:29.813 "method": "bdev_nvme_attach_controller" 00:24:29.813 } 00:24:29.813 EOF 00:24:29.813 )") 00:24:30.074 19:39:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:30.074 19:39:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:30.074 19:39:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:30.074 { 00:24:30.074 "params": { 00:24:30.074 "name": "Nvme$subsystem", 00:24:30.074 "trtype": "$TEST_TRANSPORT", 00:24:30.074 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:30.074 "adrfam": "ipv4", 00:24:30.074 "trsvcid": "$NVMF_PORT", 00:24:30.074 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:30.074 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:30.074 "hdgst": ${hdgst:-false}, 00:24:30.074 "ddgst": ${ddgst:-false} 00:24:30.074 }, 00:24:30.074 "method": "bdev_nvme_attach_controller" 00:24:30.074 } 00:24:30.074 EOF 00:24:30.074 )") 00:24:30.074 19:39:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:30.074 19:39:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:30.075 19:39:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:30.075 { 00:24:30.075 "params": { 00:24:30.075 "name": "Nvme$subsystem", 00:24:30.075 "trtype": "$TEST_TRANSPORT", 00:24:30.075 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:30.075 "adrfam": "ipv4", 00:24:30.075 "trsvcid": "$NVMF_PORT", 00:24:30.075 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:30.075 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:30.075 "hdgst": ${hdgst:-false}, 00:24:30.075 "ddgst": ${ddgst:-false} 00:24:30.075 }, 00:24:30.075 "method": "bdev_nvme_attach_controller" 00:24:30.075 } 00:24:30.075 EOF 00:24:30.075 )") 00:24:30.075 19:39:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:30.075 19:39:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:30.075 19:39:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:30.075 { 00:24:30.075 "params": { 00:24:30.075 "name": "Nvme$subsystem", 00:24:30.075 "trtype": "$TEST_TRANSPORT", 00:24:30.075 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:30.075 "adrfam": "ipv4", 00:24:30.075 "trsvcid": "$NVMF_PORT", 00:24:30.075 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:30.075 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:30.075 "hdgst": ${hdgst:-false}, 00:24:30.075 "ddgst": ${ddgst:-false} 00:24:30.075 }, 00:24:30.075 "method": "bdev_nvme_attach_controller" 00:24:30.075 } 00:24:30.075 EOF 00:24:30.075 )") 00:24:30.075 19:39:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:30.075 [2024-05-15 19:39:56.024302] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 
23.11.0 initialization... 00:24:30.075 [2024-05-15 19:39:56.024359] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3679903 ] 00:24:30.075 19:39:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:30.075 19:39:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:30.075 { 00:24:30.075 "params": { 00:24:30.075 "name": "Nvme$subsystem", 00:24:30.075 "trtype": "$TEST_TRANSPORT", 00:24:30.075 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:30.075 "adrfam": "ipv4", 00:24:30.075 "trsvcid": "$NVMF_PORT", 00:24:30.075 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:30.075 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:30.075 "hdgst": ${hdgst:-false}, 00:24:30.075 "ddgst": ${ddgst:-false} 00:24:30.075 }, 00:24:30.075 "method": "bdev_nvme_attach_controller" 00:24:30.075 } 00:24:30.075 EOF 00:24:30.075 )") 00:24:30.075 19:39:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:30.075 19:39:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:30.075 19:39:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:30.075 { 00:24:30.075 "params": { 00:24:30.075 "name": "Nvme$subsystem", 00:24:30.075 "trtype": "$TEST_TRANSPORT", 00:24:30.075 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:30.075 "adrfam": "ipv4", 00:24:30.075 "trsvcid": "$NVMF_PORT", 00:24:30.075 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:30.075 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:30.075 "hdgst": ${hdgst:-false}, 00:24:30.075 "ddgst": ${ddgst:-false} 00:24:30.075 }, 00:24:30.075 "method": "bdev_nvme_attach_controller" 00:24:30.075 } 00:24:30.075 EOF 00:24:30.075 )") 00:24:30.075 19:39:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:30.075 19:39:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:30.075 19:39:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:30.075 { 00:24:30.075 "params": { 00:24:30.075 "name": "Nvme$subsystem", 00:24:30.075 "trtype": "$TEST_TRANSPORT", 00:24:30.075 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:30.075 "adrfam": "ipv4", 00:24:30.075 "trsvcid": "$NVMF_PORT", 00:24:30.075 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:30.075 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:30.075 "hdgst": ${hdgst:-false}, 00:24:30.075 "ddgst": ${ddgst:-false} 00:24:30.075 }, 00:24:30.075 "method": "bdev_nvme_attach_controller" 00:24:30.075 } 00:24:30.075 EOF 00:24:30.075 )") 00:24:30.075 19:39:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:30.075 19:39:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:30.075 19:39:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:30.075 { 00:24:30.075 "params": { 00:24:30.075 "name": "Nvme$subsystem", 00:24:30.075 "trtype": "$TEST_TRANSPORT", 00:24:30.075 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:30.075 "adrfam": "ipv4", 00:24:30.075 "trsvcid": "$NVMF_PORT", 00:24:30.075 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:30.075 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:30.075 
"hdgst": ${hdgst:-false}, 00:24:30.075 "ddgst": ${ddgst:-false} 00:24:30.075 }, 00:24:30.075 "method": "bdev_nvme_attach_controller" 00:24:30.075 } 00:24:30.075 EOF 00:24:30.075 )") 00:24:30.075 19:39:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:30.075 EAL: No free 2048 kB hugepages reported on node 1 00:24:30.075 19:39:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:24:30.075 19:39:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:24:30.075 19:39:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:30.075 "params": { 00:24:30.075 "name": "Nvme1", 00:24:30.075 "trtype": "tcp", 00:24:30.075 "traddr": "10.0.0.2", 00:24:30.075 "adrfam": "ipv4", 00:24:30.075 "trsvcid": "4420", 00:24:30.075 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:30.075 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:30.075 "hdgst": false, 00:24:30.075 "ddgst": false 00:24:30.075 }, 00:24:30.075 "method": "bdev_nvme_attach_controller" 00:24:30.075 },{ 00:24:30.075 "params": { 00:24:30.075 "name": "Nvme2", 00:24:30.075 "trtype": "tcp", 00:24:30.075 "traddr": "10.0.0.2", 00:24:30.075 "adrfam": "ipv4", 00:24:30.075 "trsvcid": "4420", 00:24:30.075 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:30.075 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:30.075 "hdgst": false, 00:24:30.075 "ddgst": false 00:24:30.075 }, 00:24:30.075 "method": "bdev_nvme_attach_controller" 00:24:30.075 },{ 00:24:30.075 "params": { 00:24:30.075 "name": "Nvme3", 00:24:30.075 "trtype": "tcp", 00:24:30.075 "traddr": "10.0.0.2", 00:24:30.075 "adrfam": "ipv4", 00:24:30.075 "trsvcid": "4420", 00:24:30.075 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:30.075 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:30.075 "hdgst": false, 00:24:30.075 "ddgst": false 00:24:30.075 }, 00:24:30.075 "method": "bdev_nvme_attach_controller" 00:24:30.075 },{ 00:24:30.075 "params": { 00:24:30.075 "name": "Nvme4", 00:24:30.075 "trtype": "tcp", 00:24:30.075 "traddr": "10.0.0.2", 00:24:30.075 "adrfam": "ipv4", 00:24:30.075 "trsvcid": "4420", 00:24:30.075 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:30.075 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:30.075 "hdgst": false, 00:24:30.075 "ddgst": false 00:24:30.075 }, 00:24:30.075 "method": "bdev_nvme_attach_controller" 00:24:30.075 },{ 00:24:30.075 "params": { 00:24:30.075 "name": "Nvme5", 00:24:30.075 "trtype": "tcp", 00:24:30.075 "traddr": "10.0.0.2", 00:24:30.075 "adrfam": "ipv4", 00:24:30.075 "trsvcid": "4420", 00:24:30.075 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:30.075 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:30.075 "hdgst": false, 00:24:30.075 "ddgst": false 00:24:30.075 }, 00:24:30.075 "method": "bdev_nvme_attach_controller" 00:24:30.075 },{ 00:24:30.075 "params": { 00:24:30.075 "name": "Nvme6", 00:24:30.075 "trtype": "tcp", 00:24:30.075 "traddr": "10.0.0.2", 00:24:30.075 "adrfam": "ipv4", 00:24:30.075 "trsvcid": "4420", 00:24:30.075 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:30.075 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:30.075 "hdgst": false, 00:24:30.075 "ddgst": false 00:24:30.075 }, 00:24:30.075 "method": "bdev_nvme_attach_controller" 00:24:30.075 },{ 00:24:30.075 "params": { 00:24:30.075 "name": "Nvme7", 00:24:30.075 "trtype": "tcp", 00:24:30.075 "traddr": "10.0.0.2", 00:24:30.075 "adrfam": "ipv4", 00:24:30.075 "trsvcid": "4420", 00:24:30.075 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:30.075 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:30.075 "hdgst": false, 
00:24:30.075 "ddgst": false 00:24:30.075 }, 00:24:30.075 "method": "bdev_nvme_attach_controller" 00:24:30.075 },{ 00:24:30.075 "params": { 00:24:30.075 "name": "Nvme8", 00:24:30.075 "trtype": "tcp", 00:24:30.075 "traddr": "10.0.0.2", 00:24:30.075 "adrfam": "ipv4", 00:24:30.075 "trsvcid": "4420", 00:24:30.075 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:30.075 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:30.075 "hdgst": false, 00:24:30.075 "ddgst": false 00:24:30.075 }, 00:24:30.075 "method": "bdev_nvme_attach_controller" 00:24:30.075 },{ 00:24:30.075 "params": { 00:24:30.075 "name": "Nvme9", 00:24:30.075 "trtype": "tcp", 00:24:30.075 "traddr": "10.0.0.2", 00:24:30.075 "adrfam": "ipv4", 00:24:30.075 "trsvcid": "4420", 00:24:30.075 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:30.075 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:30.075 "hdgst": false, 00:24:30.075 "ddgst": false 00:24:30.075 }, 00:24:30.075 "method": "bdev_nvme_attach_controller" 00:24:30.075 },{ 00:24:30.075 "params": { 00:24:30.075 "name": "Nvme10", 00:24:30.075 "trtype": "tcp", 00:24:30.075 "traddr": "10.0.0.2", 00:24:30.076 "adrfam": "ipv4", 00:24:30.076 "trsvcid": "4420", 00:24:30.076 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:30.076 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:30.076 "hdgst": false, 00:24:30.076 "ddgst": false 00:24:30.076 }, 00:24:30.076 "method": "bdev_nvme_attach_controller" 00:24:30.076 }' 00:24:30.076 [2024-05-15 19:39:56.106172] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.076 [2024-05-15 19:39:56.170961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.462 Running I/O for 10 seconds... 00:24:31.462 19:39:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:31.462 19:39:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:24:31.462 19:39:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:31.462 19:39:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.462 19:39:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:31.723 19:39:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.723 19:39:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:31.723 19:39:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:31.723 19:39:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:31.723 19:39:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:24:31.723 19:39:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:24:31.723 19:39:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:31.723 19:39:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:31.723 19:39:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:31.723 19:39:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:31.723 19:39:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:24:31.723 19:39:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:31.723 19:39:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.723 19:39:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:24:31.723 19:39:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:24:31.723 19:39:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:31.984 19:39:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:31.984 19:39:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:31.984 19:39:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:31.984 19:39:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:31.984 19:39:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.984 19:39:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:31.984 19:39:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.984 19:39:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:24:31.984 19:39:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:24:31.984 19:39:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:32.244 19:39:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:32.244 19:39:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:32.244 19:39:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:32.244 19:39:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:32.244 19:39:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.244 19:39:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:32.244 19:39:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.244 19:39:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:24:32.244 19:39:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:24:32.244 19:39:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:24:32.244 19:39:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:24:32.244 19:39:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:24:32.244 19:39:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 3679903 00:24:32.244 19:39:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 3679903 ']' 00:24:32.244 19:39:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 3679903 00:24:32.244 19:39:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:24:32.244 19:39:58 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:32.245 19:39:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3679903 00:24:32.505 19:39:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:32.505 19:39:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:32.505 19:39:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3679903' 00:24:32.505 killing process with pid 3679903 00:24:32.505 19:39:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 3679903 00:24:32.505 19:39:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 3679903 00:24:32.505 Received shutdown signal, test time was about 1.004105 seconds 00:24:32.505 00:24:32.505 Latency(us) 00:24:32.506 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:32.506 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:32.506 Verification LBA range: start 0x0 length 0x400 00:24:32.506 Nvme1n1 : 1.00 255.18 15.95 0.00 0.00 238068.91 18131.63 242920.11 00:24:32.506 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:32.506 Verification LBA range: start 0x0 length 0x400 00:24:32.506 Nvme2n1 : 0.95 268.36 16.77 0.00 0.00 230658.99 18677.76 239424.85 00:24:32.506 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:32.506 Verification LBA range: start 0x0 length 0x400 00:24:32.506 Nvme3n1 : 0.94 275.16 17.20 0.00 0.00 218583.36 7700.48 235929.60 00:24:32.506 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:32.506 Verification LBA range: start 0x0 length 0x400 00:24:32.506 Nvme4n1 : 0.97 262.64 16.42 0.00 0.00 226044.80 22063.79 230686.72 00:24:32.506 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:32.506 Verification LBA range: start 0x0 length 0x400 00:24:32.506 Nvme5n1 : 0.92 207.90 12.99 0.00 0.00 278835.77 20097.71 246415.36 00:24:32.506 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:32.506 Verification LBA range: start 0x0 length 0x400 00:24:32.506 Nvme6n1 : 0.96 266.95 16.68 0.00 0.00 213141.97 20862.29 253405.87 00:24:32.506 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:32.506 Verification LBA range: start 0x0 length 0x400 00:24:32.506 Nvme7n1 : 0.97 263.08 16.44 0.00 0.00 211374.59 22719.15 242920.11 00:24:32.506 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:32.506 Verification LBA range: start 0x0 length 0x400 00:24:32.506 Nvme8n1 : 0.94 204.22 12.76 0.00 0.00 265205.19 21626.88 242920.11 00:24:32.506 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:32.506 Verification LBA range: start 0x0 length 0x400 00:24:32.506 Nvme9n1 : 0.94 203.65 12.73 0.00 0.00 259799.32 22282.24 244667.73 00:24:32.506 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:32.506 Verification LBA range: start 0x0 length 0x400 00:24:32.506 Nvme10n1 : 0.95 201.67 12.60 0.00 0.00 256780.80 21845.33 269134.51 00:24:32.506 =================================================================================================================== 00:24:32.506 Total : 2408.83 150.55 0.00 0.00 237026.25 
7700.48 269134.51 00:24:32.767 19:39:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:24:33.710 19:39:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 3679518 00:24:33.710 19:39:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:24:33.710 19:39:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:33.710 19:39:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:33.710 19:39:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:33.711 19:39:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:33.711 19:39:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:33.711 19:39:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:24:33.711 19:39:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:33.711 19:39:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:24:33.711 19:39:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:33.711 19:39:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:33.711 rmmod nvme_tcp 00:24:33.711 rmmod nvme_fabrics 00:24:33.711 rmmod nvme_keyring 00:24:33.711 19:39:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:33.711 19:39:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:24:33.711 19:39:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:24:33.711 19:39:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 3679518 ']' 00:24:33.711 19:39:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 3679518 00:24:33.711 19:39:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 3679518 ']' 00:24:33.711 19:39:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 3679518 00:24:33.711 19:39:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:24:33.711 19:39:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:33.711 19:39:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3679518 00:24:33.711 19:39:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:33.711 19:39:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:33.711 19:39:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3679518' 00:24:33.711 killing process with pid 3679518 00:24:33.711 19:39:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 3679518 00:24:33.711 [2024-05-15 19:39:59.868383] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 
times 00:24:33.711 19:39:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 3679518 00:24:33.972 19:40:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:33.972 19:40:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:33.972 19:40:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:33.972 19:40:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:33.972 19:40:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:33.972 19:40:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:33.972 19:40:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:33.972 19:40:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:36.520 00:24:36.520 real 0m8.131s 00:24:36.520 user 0m24.481s 00:24:36.520 sys 0m1.397s 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:36.520 ************************************ 00:24:36.520 END TEST nvmf_shutdown_tc2 00:24:36.520 ************************************ 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:36.520 ************************************ 00:24:36.520 START TEST nvmf_shutdown_tc3 00:24:36.520 ************************************ 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc3 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy 
!= virt ]] 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:36.520 19:40:02 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:36.520 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:36.520 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:36.520 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 
00:24:36.521 Found net devices under 0000:31:00.0: cvl_0_0 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:36.521 Found net devices under 0000:31:00.1: cvl_0_1 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:36.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:36.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.574 ms 00:24:36.521 00:24:36.521 --- 10.0.0.2 ping statistics --- 00:24:36.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.521 rtt min/avg/max/mdev = 0.574/0.574/0.574/0.000 ms 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:36.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:36.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.335 ms 00:24:36.521 00:24:36.521 --- 10.0.0.1 ping statistics --- 00:24:36.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.521 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3681365 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3681365 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 3681365 ']' 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:36.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:36.521 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:36.782 [2024-05-15 19:40:02.725953] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:24:36.782 [2024-05-15 19:40:02.725993] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:36.782 EAL: No free 2048 kB hugepages reported on node 1 00:24:36.782 [2024-05-15 19:40:02.787747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:36.782 [2024-05-15 19:40:02.851975] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:36.782 [2024-05-15 19:40:02.852011] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:36.782 [2024-05-15 19:40:02.852018] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:36.782 [2024-05-15 19:40:02.852024] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:36.782 [2024-05-15 19:40:02.852030] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
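The xtrace above (nvmftestinit / nvmf_tcp_init / nvmfappstart) boils down to: keep one E810 port (cvl_0_1, 10.0.0.1) in the root namespace as the initiator side, move its peer (cvl_0_0, 10.0.0.2) into the cvl_0_0_ns_spdk namespace, open TCP port 4420, verify reachability with ping, then run nvmf_tgt inside that namespace on cores 1-4 (-m 0x1E). A minimal standalone sketch of that wiring follows; it reuses the interface names, addresses and flags seen in the log, but it is a simplified approximation rather than the exact nvmf/common.sh code (the binary path is shortened and error handling is omitted).
# Sketch: isolate the target port in its own net namespace and start the target there.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side (root ns)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side (namespace)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                               # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target ns -> root ns
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &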
00:24:36.782 [2024-05-15 19:40:02.852131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:36.782 [2024-05-15 19:40:02.852284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:36.782 [2024-05-15 19:40:02.852442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:36.782 [2024-05-15 19:40:02.852569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:36.782 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:36.782 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:24:36.782 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:36.782 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:36.782 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:37.042 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:37.042 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:37.042 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.042 19:40:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:37.042 [2024-05-15 19:40:02.995144] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:37.042 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.042 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:37.042 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:37.042 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:37.042 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:37.042 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:37.042 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:37.042 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:37.042 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:37.042 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:37.042 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:37.042 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:37.042 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:37.042 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:37.042 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:37.042 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:37.042 19:40:03 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:37.042 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:37.042 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:37.042 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:37.042 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:37.042 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:37.042 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:37.042 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:37.042 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:37.042 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:37.042 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:37.042 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.042 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:37.042 Malloc1 00:24:37.043 [2024-05-15 19:40:03.098370] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:37.043 [2024-05-15 19:40:03.098603] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:37.043 Malloc2 00:24:37.043 Malloc3 00:24:37.043 Malloc4 00:24:37.303 Malloc5 00:24:37.303 Malloc6 00:24:37.303 Malloc7 00:24:37.303 Malloc8 00:24:37.303 Malloc9 00:24:37.303 Malloc10 00:24:37.303 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.303 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:37.303 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:37.303 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:37.566 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=3681465 00:24:37.566 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 3681465 /var/tmp/bdevperf.sock 00:24:37.566 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 3681465 ']' 00:24:37.566 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:37.566 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:37.566 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:37.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
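Between target start-up and the bdevperf launch that follows, starttarget batches one block per subsystem into target/rpcs.txt: ten subsystems nqn.2016-06.io.spdk:cnode1..10, each backed by a MallocN bdev and listening on 10.0.0.2:4420 (the matching per-subsystem connection parameters appear expanded in the generated bdevperf JSON further down). A rough sketch of what that batch amounts to, issued one call at a time; the rpc.py command names are the standard SPDK ones, while the Malloc size/block size and serial numbers are illustrative and not taken from this log.
# Sketch of the target-side configuration built by starttarget (values partly illustrative).
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192          # done above via rpc_cmd
for i in $(seq 1 10); do
    ./scripts/rpc.py bdev_malloc_create -b Malloc$i 64 512        # 64 MiB, 512 B blocks (assumed size)
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
        -t tcp -a 10.0.0.2 -s 4420
done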
00:24:37.566 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:37.566 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:37.566 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:37.566 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:37.566 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:24:37.566 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:24:37.566 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:37.566 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:37.566 { 00:24:37.566 "params": { 00:24:37.566 "name": "Nvme$subsystem", 00:24:37.566 "trtype": "$TEST_TRANSPORT", 00:24:37.566 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:37.566 "adrfam": "ipv4", 00:24:37.566 "trsvcid": "$NVMF_PORT", 00:24:37.566 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:37.566 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:37.566 "hdgst": ${hdgst:-false}, 00:24:37.566 "ddgst": ${ddgst:-false} 00:24:37.566 }, 00:24:37.566 "method": "bdev_nvme_attach_controller" 00:24:37.566 } 00:24:37.566 EOF 00:24:37.567 )") 00:24:37.567 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:37.567 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:37.567 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:37.567 { 00:24:37.567 "params": { 00:24:37.567 "name": "Nvme$subsystem", 00:24:37.567 "trtype": "$TEST_TRANSPORT", 00:24:37.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:37.567 "adrfam": "ipv4", 00:24:37.567 "trsvcid": "$NVMF_PORT", 00:24:37.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:37.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:37.567 "hdgst": ${hdgst:-false}, 00:24:37.567 "ddgst": ${ddgst:-false} 00:24:37.567 }, 00:24:37.567 "method": "bdev_nvme_attach_controller" 00:24:37.567 } 00:24:37.567 EOF 00:24:37.567 )") 00:24:37.567 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:37.567 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:37.567 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:37.567 { 00:24:37.567 "params": { 00:24:37.567 "name": "Nvme$subsystem", 00:24:37.567 "trtype": "$TEST_TRANSPORT", 00:24:37.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:37.567 "adrfam": "ipv4", 00:24:37.567 "trsvcid": "$NVMF_PORT", 00:24:37.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:37.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:37.567 "hdgst": ${hdgst:-false}, 00:24:37.567 "ddgst": ${ddgst:-false} 00:24:37.567 }, 00:24:37.567 "method": "bdev_nvme_attach_controller" 00:24:37.567 } 00:24:37.567 EOF 00:24:37.567 )") 00:24:37.567 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:37.567 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:24:37.567 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:37.567 { 00:24:37.567 "params": { 00:24:37.567 "name": "Nvme$subsystem", 00:24:37.567 "trtype": "$TEST_TRANSPORT", 00:24:37.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:37.567 "adrfam": "ipv4", 00:24:37.567 "trsvcid": "$NVMF_PORT", 00:24:37.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:37.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:37.567 "hdgst": ${hdgst:-false}, 00:24:37.567 "ddgst": ${ddgst:-false} 00:24:37.567 }, 00:24:37.567 "method": "bdev_nvme_attach_controller" 00:24:37.567 } 00:24:37.567 EOF 00:24:37.567 )") 00:24:37.567 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:37.567 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:37.567 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:37.567 { 00:24:37.567 "params": { 00:24:37.567 "name": "Nvme$subsystem", 00:24:37.567 "trtype": "$TEST_TRANSPORT", 00:24:37.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:37.567 "adrfam": "ipv4", 00:24:37.567 "trsvcid": "$NVMF_PORT", 00:24:37.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:37.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:37.567 "hdgst": ${hdgst:-false}, 00:24:37.567 "ddgst": ${ddgst:-false} 00:24:37.567 }, 00:24:37.567 "method": "bdev_nvme_attach_controller" 00:24:37.567 } 00:24:37.567 EOF 00:24:37.567 )") 00:24:37.567 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:37.567 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:37.567 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:37.567 { 00:24:37.567 "params": { 00:24:37.567 "name": "Nvme$subsystem", 00:24:37.567 "trtype": "$TEST_TRANSPORT", 00:24:37.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:37.567 "adrfam": "ipv4", 00:24:37.567 "trsvcid": "$NVMF_PORT", 00:24:37.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:37.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:37.567 "hdgst": ${hdgst:-false}, 00:24:37.567 "ddgst": ${ddgst:-false} 00:24:37.567 }, 00:24:37.567 "method": "bdev_nvme_attach_controller" 00:24:37.567 } 00:24:37.567 EOF 00:24:37.567 )") 00:24:37.567 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:37.567 [2024-05-15 19:40:03.549148] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
00:24:37.567 [2024-05-15 19:40:03.549202] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3681465 ] 00:24:37.567 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:37.567 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:37.567 { 00:24:37.567 "params": { 00:24:37.567 "name": "Nvme$subsystem", 00:24:37.567 "trtype": "$TEST_TRANSPORT", 00:24:37.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:37.567 "adrfam": "ipv4", 00:24:37.567 "trsvcid": "$NVMF_PORT", 00:24:37.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:37.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:37.567 "hdgst": ${hdgst:-false}, 00:24:37.567 "ddgst": ${ddgst:-false} 00:24:37.567 }, 00:24:37.567 "method": "bdev_nvme_attach_controller" 00:24:37.567 } 00:24:37.567 EOF 00:24:37.567 )") 00:24:37.567 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:37.567 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:37.567 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:37.567 { 00:24:37.567 "params": { 00:24:37.567 "name": "Nvme$subsystem", 00:24:37.567 "trtype": "$TEST_TRANSPORT", 00:24:37.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:37.567 "adrfam": "ipv4", 00:24:37.567 "trsvcid": "$NVMF_PORT", 00:24:37.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:37.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:37.567 "hdgst": ${hdgst:-false}, 00:24:37.567 "ddgst": ${ddgst:-false} 00:24:37.567 }, 00:24:37.567 "method": "bdev_nvme_attach_controller" 00:24:37.567 } 00:24:37.567 EOF 00:24:37.567 )") 00:24:37.567 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:37.567 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:37.567 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:37.567 { 00:24:37.567 "params": { 00:24:37.567 "name": "Nvme$subsystem", 00:24:37.567 "trtype": "$TEST_TRANSPORT", 00:24:37.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:37.567 "adrfam": "ipv4", 00:24:37.567 "trsvcid": "$NVMF_PORT", 00:24:37.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:37.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:37.567 "hdgst": ${hdgst:-false}, 00:24:37.567 "ddgst": ${ddgst:-false} 00:24:37.567 }, 00:24:37.567 "method": "bdev_nvme_attach_controller" 00:24:37.567 } 00:24:37.567 EOF 00:24:37.567 )") 00:24:37.567 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:37.567 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:37.567 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:37.567 { 00:24:37.567 "params": { 00:24:37.567 "name": "Nvme$subsystem", 00:24:37.567 "trtype": "$TEST_TRANSPORT", 00:24:37.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:37.567 "adrfam": "ipv4", 00:24:37.567 "trsvcid": "$NVMF_PORT", 00:24:37.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:37.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:37.567 "hdgst": ${hdgst:-false}, 
00:24:37.568 "ddgst": ${ddgst:-false} 00:24:37.568 }, 00:24:37.568 "method": "bdev_nvme_attach_controller" 00:24:37.568 } 00:24:37.568 EOF 00:24:37.568 )") 00:24:37.568 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:37.568 EAL: No free 2048 kB hugepages reported on node 1 00:24:37.568 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:24:37.568 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:24:37.568 19:40:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:37.568 "params": { 00:24:37.568 "name": "Nvme1", 00:24:37.568 "trtype": "tcp", 00:24:37.568 "traddr": "10.0.0.2", 00:24:37.568 "adrfam": "ipv4", 00:24:37.568 "trsvcid": "4420", 00:24:37.568 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:37.568 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:37.568 "hdgst": false, 00:24:37.568 "ddgst": false 00:24:37.568 }, 00:24:37.568 "method": "bdev_nvme_attach_controller" 00:24:37.568 },{ 00:24:37.568 "params": { 00:24:37.568 "name": "Nvme2", 00:24:37.568 "trtype": "tcp", 00:24:37.568 "traddr": "10.0.0.2", 00:24:37.568 "adrfam": "ipv4", 00:24:37.568 "trsvcid": "4420", 00:24:37.568 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:37.568 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:37.568 "hdgst": false, 00:24:37.568 "ddgst": false 00:24:37.568 }, 00:24:37.568 "method": "bdev_nvme_attach_controller" 00:24:37.568 },{ 00:24:37.568 "params": { 00:24:37.568 "name": "Nvme3", 00:24:37.568 "trtype": "tcp", 00:24:37.568 "traddr": "10.0.0.2", 00:24:37.568 "adrfam": "ipv4", 00:24:37.568 "trsvcid": "4420", 00:24:37.568 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:37.568 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:37.568 "hdgst": false, 00:24:37.568 "ddgst": false 00:24:37.568 }, 00:24:37.568 "method": "bdev_nvme_attach_controller" 00:24:37.568 },{ 00:24:37.568 "params": { 00:24:37.568 "name": "Nvme4", 00:24:37.568 "trtype": "tcp", 00:24:37.568 "traddr": "10.0.0.2", 00:24:37.568 "adrfam": "ipv4", 00:24:37.568 "trsvcid": "4420", 00:24:37.568 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:37.568 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:37.568 "hdgst": false, 00:24:37.568 "ddgst": false 00:24:37.568 }, 00:24:37.568 "method": "bdev_nvme_attach_controller" 00:24:37.568 },{ 00:24:37.568 "params": { 00:24:37.568 "name": "Nvme5", 00:24:37.568 "trtype": "tcp", 00:24:37.568 "traddr": "10.0.0.2", 00:24:37.568 "adrfam": "ipv4", 00:24:37.568 "trsvcid": "4420", 00:24:37.568 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:37.568 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:37.568 "hdgst": false, 00:24:37.568 "ddgst": false 00:24:37.568 }, 00:24:37.568 "method": "bdev_nvme_attach_controller" 00:24:37.568 },{ 00:24:37.568 "params": { 00:24:37.568 "name": "Nvme6", 00:24:37.568 "trtype": "tcp", 00:24:37.568 "traddr": "10.0.0.2", 00:24:37.568 "adrfam": "ipv4", 00:24:37.568 "trsvcid": "4420", 00:24:37.568 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:37.568 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:37.568 "hdgst": false, 00:24:37.568 "ddgst": false 00:24:37.568 }, 00:24:37.568 "method": "bdev_nvme_attach_controller" 00:24:37.568 },{ 00:24:37.568 "params": { 00:24:37.568 "name": "Nvme7", 00:24:37.568 "trtype": "tcp", 00:24:37.568 "traddr": "10.0.0.2", 00:24:37.568 "adrfam": "ipv4", 00:24:37.568 "trsvcid": "4420", 00:24:37.568 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:37.568 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:37.568 "hdgst": false, 00:24:37.568 "ddgst": false 
00:24:37.568 }, 00:24:37.568 "method": "bdev_nvme_attach_controller" 00:24:37.568 },{ 00:24:37.568 "params": { 00:24:37.568 "name": "Nvme8", 00:24:37.568 "trtype": "tcp", 00:24:37.568 "traddr": "10.0.0.2", 00:24:37.568 "adrfam": "ipv4", 00:24:37.568 "trsvcid": "4420", 00:24:37.568 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:37.568 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:37.568 "hdgst": false, 00:24:37.568 "ddgst": false 00:24:37.568 }, 00:24:37.568 "method": "bdev_nvme_attach_controller" 00:24:37.568 },{ 00:24:37.568 "params": { 00:24:37.568 "name": "Nvme9", 00:24:37.568 "trtype": "tcp", 00:24:37.568 "traddr": "10.0.0.2", 00:24:37.568 "adrfam": "ipv4", 00:24:37.568 "trsvcid": "4420", 00:24:37.568 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:37.568 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:37.568 "hdgst": false, 00:24:37.568 "ddgst": false 00:24:37.568 }, 00:24:37.568 "method": "bdev_nvme_attach_controller" 00:24:37.568 },{ 00:24:37.568 "params": { 00:24:37.568 "name": "Nvme10", 00:24:37.568 "trtype": "tcp", 00:24:37.568 "traddr": "10.0.0.2", 00:24:37.568 "adrfam": "ipv4", 00:24:37.568 "trsvcid": "4420", 00:24:37.568 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:37.568 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:37.568 "hdgst": false, 00:24:37.568 "ddgst": false 00:24:37.568 }, 00:24:37.568 "method": "bdev_nvme_attach_controller" 00:24:37.568 }' 00:24:37.568 [2024-05-15 19:40:03.631857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.568 [2024-05-15 19:40:03.697051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.483 Running I/O for 10 seconds... 00:24:39.483 19:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:39.483 19:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:24:39.483 19:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:39.483 19:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.483 19:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:39.483 19:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.483 19:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:39.483 19:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:39.483 19:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:39.483 19:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:39.483 19:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:24:39.483 19:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:24:39.483 19:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:39.483 19:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:39.483 19:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:39.483 19:40:05 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:39.483 19:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.483 19:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:39.483 19:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.483 19:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:24:39.483 19:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:24:39.483 19:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:39.744 19:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:39.744 19:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:39.744 19:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:39.744 19:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:39.744 19:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.744 19:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:39.744 19:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.744 19:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:24:39.744 19:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:24:39.744 19:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:40.005 19:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:40.005 19:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:40.005 19:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:40.005 19:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:40.005 19:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.005 19:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:40.284 19:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.284 19:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:24:40.284 19:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:24:40.284 19:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:24:40.284 19:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:24:40.284 19:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:24:40.284 19:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 3681365 00:24:40.284 19:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # '[' -z 3681365 ']' 00:24:40.284 19:40:06 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # kill -0 3681365 00:24:40.284 19:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # uname 00:24:40.284 19:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:40.284 19:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3681365 00:24:40.284 19:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:40.284 19:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:40.284 19:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3681365' 00:24:40.284 killing process with pid 3681365 00:24:40.284 19:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # kill 3681365 00:24:40.284 [2024-05-15 19:40:06.262928] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:40.284 19:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # wait 3681365 00:24:40.284 [2024-05-15 19:40:06.264282] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.284 [2024-05-15 19:40:06.264321] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.284 [2024-05-15 19:40:06.264329] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.284 [2024-05-15 19:40:06.264336] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.284 [2024-05-15 19:40:06.264343] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.284 [2024-05-15 19:40:06.264351] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.284 [2024-05-15 19:40:06.264358] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.284 [2024-05-15 19:40:06.264364] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.284 [2024-05-15 19:40:06.264371] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.284 [2024-05-15 19:40:06.264377] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.284 [2024-05-15 19:40:06.264389] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.284 [2024-05-15 19:40:06.264396] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.284 [2024-05-15 19:40:06.264402] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.284 [2024-05-15 19:40:06.264409] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.284 [2024-05-15 19:40:06.264416] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.284 [2024-05-15 19:40:06.264424] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.284 [2024-05-15 19:40:06.264431] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.284 [2024-05-15 19:40:06.264437] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.284 [2024-05-15 19:40:06.264444] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.284 [2024-05-15 19:40:06.264450] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.284 [2024-05-15 19:40:06.264456] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.284 [2024-05-15 19:40:06.264463] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.284 [2024-05-15 19:40:06.264469] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.284 [2024-05-15 19:40:06.264475] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.284 [2024-05-15 19:40:06.264482] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.284 [2024-05-15 19:40:06.264489] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.284 [2024-05-15 19:40:06.264496] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.284 [2024-05-15 19:40:06.264502] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.284 [2024-05-15 19:40:06.264509] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.284 [2024-05-15 19:40:06.264515] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.284 [2024-05-15 19:40:06.264521] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.284 [2024-05-15 19:40:06.264527] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.284 [2024-05-15 19:40:06.264534] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.284 [2024-05-15 19:40:06.264540] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.284 [2024-05-15 19:40:06.264547] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the 
state(5) to be set 00:24:40.284 [2024-05-15 19:40:06.264553] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.285 [2024-05-15 19:40:06.264559] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.285 [2024-05-15 19:40:06.264567] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.285 [2024-05-15 19:40:06.264574] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.285 [2024-05-15 19:40:06.264580] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.285 [2024-05-15 19:40:06.264586] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.285 [2024-05-15 19:40:06.264593] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.285 [2024-05-15 19:40:06.264599] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.285 [2024-05-15 19:40:06.264606] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.285 [2024-05-15 19:40:06.264613] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.285 [2024-05-15 19:40:06.264619] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.285 [2024-05-15 19:40:06.264625] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.285 [2024-05-15 19:40:06.264631] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.285 [2024-05-15 19:40:06.264637] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.285 [2024-05-15 19:40:06.264643] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.285 [2024-05-15 19:40:06.264649] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.285 [2024-05-15 19:40:06.264656] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.285 [2024-05-15 19:40:06.264662] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.285 [2024-05-15 19:40:06.264669] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.285 [2024-05-15 19:40:06.264675] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.285 [2024-05-15 19:40:06.264682] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.285 [2024-05-15 19:40:06.264688] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.285 [2024-05-15 19:40:06.264695] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.285 [2024-05-15 19:40:06.264701] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.285 [2024-05-15 19:40:06.264707] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.285 [2024-05-15 19:40:06.264716] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.285 [2024-05-15 19:40:06.264723] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.285 [2024-05-15 19:40:06.264729] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810480 is same with the state(5) to be set 00:24:40.285 [2024-05-15 19:40:06.265929] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180df60 is same with the state(5) to be set 00:24:40.285 [2024-05-15 19:40:06.265946] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180df60 is same with the state(5) to be set 00:24:40.285 [2024-05-15 19:40:06.265953] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180df60 is same with the state(5) to be set 00:24:40.285 [2024-05-15 19:40:06.265959] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180df60 is same with the state(5) to be set 00:24:40.285 [2024-05-15 19:40:06.265966] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180df60 is same with the state(5) to be set 00:24:40.285 [2024-05-15 19:40:06.265973] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180df60 is same with the state(5) to be set 00:24:40.285 [2024-05-15 19:40:06.265980] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180df60 is same with the state(5) to be set 00:24:40.285 [2024-05-15 19:40:06.265987] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180df60 is same with the state(5) to be set 00:24:40.285 [2024-05-15 19:40:06.265993] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180df60 is same with the state(5) to be set 00:24:40.285 [2024-05-15 19:40:06.265999] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180df60 is same with the state(5) to be set 00:24:40.285 [2024-05-15 19:40:06.266006] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180df60 is same with the state(5) to be set 00:24:40.285 [2024-05-15 19:40:06.266012] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180df60 is same with the state(5) to be set 00:24:40.285 [2024-05-15 19:40:06.266019] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180df60 is same with the state(5) to be set 00:24:40.285 [2024-05-15 19:40:06.266026] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180df60 is same with the state(5) to be set 00:24:40.285 [2024-05-15 19:40:06.266033] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180df60 is same with the state(5) to be set 00:24:40.285 [2024-05-15 
19:40:06.266040] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180df60 is same with the state(5) to be set
[... same tcp.c:1598 recv-state error repeated for tqpair=0x180df60 ...]
00:24:40.286 [2024-05-15 19:40:06.266356] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180df60 is same with the state(5) to be set
00:24:40.286 [2024-05-15 19:40:06.267642] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180e420 is same with the state(5) to be set
[... same tcp.c:1598 recv-state error repeated for tqpair=0x180e420 ...]
00:24:40.287 [2024-05-15 19:40:06.268059] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180e420 is same with the state(5) to be set
00:24:40.287 [2024-05-15 19:40:06.269393] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180ed60 is same with the state(5) to be set
[... same tcp.c:1598 recv-state error repeated for tqpair=0x180ed60 ...]
00:24:40.288 [2024-05-15 19:40:06.269698] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180ed60 is same with the state(5) to be set
00:24:40.288 [2024-05-15 19:40:06.270463] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f200 is same with the state(5) to be set
[... same tcp.c:1598 recv-state error repeated for tqpair=0x180f200 ...]
00:24:40.288 [2024-05-15 19:40:06.270880] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f200 is same with the state(5) to be set
00:24:40.288 [2024-05-15 19:40:06.271793] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f6a0 is same with the state(5) to be set
[... same tcp.c:1598 recv-state error repeated for tqpair=0x180f6a0 ...]
00:24:40.288 [2024-05-15 19:40:06.271838] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f6a0 is same with the state(5) to be set
00:24:40.288 [2024-05-15 19:40:06.272273] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180fb40 is same with the state(5) to be set
[... same tcp.c:1598 recv-state error repeated for tqpair=0x180fb40 ...]
00:24:40.289 [2024-05-15 19:40:06.272711] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180fb40 is same with the state(5) to be set
00:24:40.289 [2024-05-15 19:40:06.273307] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180ffe0 is same with the state(5) to be set
[... same tcp.c:1598 recv-state error repeated for tqpair=0x180ffe0 ...]
00:24:40.290 [2024-05-15 19:40:06.273615] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180ffe0 is same with the state(5) to be set
00:24:40.290 [2024-05-15 19:40:06.279854] nvme_qpair.c:
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.290 [2024-05-15 19:40:06.279892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.290 [2024-05-15 19:40:06.279908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.290 [2024-05-15 19:40:06.279922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.290 [2024-05-15 19:40:06.279935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.290 [2024-05-15 19:40:06.279948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.290 [2024-05-15 19:40:06.279962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.290 [2024-05-15 19:40:06.279975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.290 [2024-05-15 19:40:06.279988] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac800 is same with the state(5) to be set 00:24:40.290 [2024-05-15 19:40:06.280029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.290 [2024-05-15 19:40:06.280042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.290 [2024-05-15 19:40:06.280054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.290 [2024-05-15 19:40:06.280062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.290 [2024-05-15 19:40:06.280070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.290 [2024-05-15 19:40:06.280076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.290 [2024-05-15 19:40:06.280084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.290 [2024-05-15 19:40:06.280092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.290 [2024-05-15 19:40:06.280099] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134cc00 is same with the state(5) to be set 00:24:40.290 [2024-05-15 19:40:06.280131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.290 [2024-05-15 19:40:06.280145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.290 [2024-05-15 19:40:06.280158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:24:40.290 [2024-05-15 19:40:06.280171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.290 [2024-05-15 19:40:06.280185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.290 [2024-05-15 19:40:06.280198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.290 [2024-05-15 19:40:06.280212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.290 [2024-05-15 19:40:06.280230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.290 [2024-05-15 19:40:06.280242] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134a2c0 is same with the state(5) to be set 00:24:40.290 [2024-05-15 19:40:06.280276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.290 [2024-05-15 19:40:06.280285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.290 [2024-05-15 19:40:06.280294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.290 [2024-05-15 19:40:06.280301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.290 [2024-05-15 19:40:06.280309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.290 [2024-05-15 19:40:06.280323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.290 [2024-05-15 19:40:06.280331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.290 [2024-05-15 19:40:06.280338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.290 [2024-05-15 19:40:06.280345] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4610 is same with the state(5) to be set 00:24:40.290 [2024-05-15 19:40:06.280370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.290 [2024-05-15 19:40:06.280379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.290 [2024-05-15 19:40:06.280387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.290 [2024-05-15 19:40:06.280394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.290 [2024-05-15 19:40:06.280402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.290 [2024-05-15 19:40:06.280410] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.290 [2024-05-15 19:40:06.280418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.290 [2024-05-15 19:40:06.280425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.291 [2024-05-15 19:40:06.280432] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bf7c0 is same with the state(5) to be set 00:24:40.291 [2024-05-15 19:40:06.280454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.291 [2024-05-15 19:40:06.280462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.291 [2024-05-15 19:40:06.280470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.291 [2024-05-15 19:40:06.280478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.291 [2024-05-15 19:40:06.280490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.291 [2024-05-15 19:40:06.280507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.291 [2024-05-15 19:40:06.280520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.291 [2024-05-15 19:40:06.280531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.291 [2024-05-15 19:40:06.280542] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1ce0 is same with the state(5) to be set 00:24:40.291 [2024-05-15 19:40:06.280568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.291 [2024-05-15 19:40:06.280576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.291 [2024-05-15 19:40:06.280584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.291 [2024-05-15 19:40:06.280593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.291 [2024-05-15 19:40:06.280601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.291 [2024-05-15 19:40:06.280609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.291 [2024-05-15 19:40:06.280616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.291 [2024-05-15 19:40:06.280623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:40.291 [2024-05-15 19:40:06.280630] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129dee0 is same with the state(5) to be set 00:24:40.291 [2024-05-15 19:40:06.280653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.291 [2024-05-15 19:40:06.280661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.291 [2024-05-15 19:40:06.280670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.291 [2024-05-15 19:40:06.280677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.291 [2024-05-15 19:40:06.280686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.291 [2024-05-15 19:40:06.280694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.291 [2024-05-15 19:40:06.280702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.291 [2024-05-15 19:40:06.280709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.291 [2024-05-15 19:40:06.280716] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1468690 is same with the state(5) to be set 00:24:40.291 [2024-05-15 19:40:06.280738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.291 [2024-05-15 19:40:06.280746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.291 [2024-05-15 19:40:06.280755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.291 [2024-05-15 19:40:06.280762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.291 [2024-05-15 19:40:06.280772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.291 [2024-05-15 19:40:06.280779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.291 [2024-05-15 19:40:06.280788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.291 [2024-05-15 19:40:06.280795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.291 [2024-05-15 19:40:06.280803] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e2370 is same with the state(5) to be set 00:24:40.291 [2024-05-15 19:40:06.280826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.291 [2024-05-15 19:40:06.280834] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.291 [2024-05-15 19:40:06.280843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.291 [2024-05-15 19:40:06.280850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.291 [2024-05-15 19:40:06.280858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.291 [2024-05-15 19:40:06.280865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.291 [2024-05-15 19:40:06.280873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.291 [2024-05-15 19:40:06.280881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.291 [2024-05-15 19:40:06.280890] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129abc0 is same with the state(5) to be set 00:24:40.291 [2024-05-15 19:40:06.281337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.291 [2024-05-15 19:40:06.281358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.291 [2024-05-15 19:40:06.281373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.291 [2024-05-15 19:40:06.281382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.291 [2024-05-15 19:40:06.281392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.291 [2024-05-15 19:40:06.281399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.291 [2024-05-15 19:40:06.281409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.291 [2024-05-15 19:40:06.281416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.291 [2024-05-15 19:40:06.281425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.291 [2024-05-15 19:40:06.281432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.291 [2024-05-15 19:40:06.281443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.291 [2024-05-15 19:40:06.281453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.291 [2024-05-15 19:40:06.281463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.291 [2024-05-15 19:40:06.281470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.291 [2024-05-15 19:40:06.281480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.291 [2024-05-15 19:40:06.281487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.291 [2024-05-15 19:40:06.281496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.291 [2024-05-15 19:40:06.281504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.291 [2024-05-15 19:40:06.281513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.291 [2024-05-15 19:40:06.281520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.291 [2024-05-15 19:40:06.281529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.291 [2024-05-15 19:40:06.281537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.291 [2024-05-15 19:40:06.281547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.291 [2024-05-15 19:40:06.281554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.291 [2024-05-15 19:40:06.281563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.291 [2024-05-15 19:40:06.281570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.291 [2024-05-15 19:40:06.281579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.291 [2024-05-15 19:40:06.281586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.291 [2024-05-15 19:40:06.281595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.291 [2024-05-15 19:40:06.281603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.291 [2024-05-15 19:40:06.281612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.291 [2024-05-15 19:40:06.281619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.291 [2024-05-15 19:40:06.281629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 
nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.291 [2024-05-15 19:40:06.281636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.291 [2024-05-15 19:40:06.281645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.291 [2024-05-15 19:40:06.281652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.292 [2024-05-15 19:40:06.281663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.292 [2024-05-15 19:40:06.281670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.292 [2024-05-15 19:40:06.281679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.292 [2024-05-15 19:40:06.281687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.292 [2024-05-15 19:40:06.281697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.292 [2024-05-15 19:40:06.281705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.292 [2024-05-15 19:40:06.281714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.292 [2024-05-15 19:40:06.281721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.292 [2024-05-15 19:40:06.281731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.292 [2024-05-15 19:40:06.281738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.292 [2024-05-15 19:40:06.281748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.292 [2024-05-15 19:40:06.281756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.292 [2024-05-15 19:40:06.281765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.292 [2024-05-15 19:40:06.281773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.292 [2024-05-15 19:40:06.281782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.292 [2024-05-15 19:40:06.281790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.292 [2024-05-15 19:40:06.281799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 
lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.292 [2024-05-15 19:40:06.281806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.292 [2024-05-15 19:40:06.281815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.292 [2024-05-15 19:40:06.281823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.292 [2024-05-15 19:40:06.281832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.292 [2024-05-15 19:40:06.281839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.292 [2024-05-15 19:40:06.281848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.292 [2024-05-15 19:40:06.281856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.292 [2024-05-15 19:40:06.281865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.292 [2024-05-15 19:40:06.281873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.292 [2024-05-15 19:40:06.281884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.292 [2024-05-15 19:40:06.281891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.292 [2024-05-15 19:40:06.281900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.292 [2024-05-15 19:40:06.281907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.292 [2024-05-15 19:40:06.281917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.292 [2024-05-15 19:40:06.281924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.292 [2024-05-15 19:40:06.281934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.292 [2024-05-15 19:40:06.281940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.292 [2024-05-15 19:40:06.281950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.292 [2024-05-15 19:40:06.281957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.292 [2024-05-15 19:40:06.281966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.292 [2024-05-15 19:40:06.281975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.292 [2024-05-15 19:40:06.281985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.292 [2024-05-15 19:40:06.281992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.292 [2024-05-15 19:40:06.282002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.292 [2024-05-15 19:40:06.282009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.292 [2024-05-15 19:40:06.282018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.292 [2024-05-15 19:40:06.282026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.292 [2024-05-15 19:40:06.282035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.292 [2024-05-15 19:40:06.282042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.292 [2024-05-15 19:40:06.282052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.292 [2024-05-15 19:40:06.282059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.292 [2024-05-15 19:40:06.282068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.292 [2024-05-15 19:40:06.282075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.292 [2024-05-15 19:40:06.282086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.292 [2024-05-15 19:40:06.282093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.292 [2024-05-15 19:40:06.282102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.292 [2024-05-15 19:40:06.282110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.292 [2024-05-15 19:40:06.282120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.292 [2024-05-15 19:40:06.282127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.292 [2024-05-15 19:40:06.282136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.292 [2024-05-15 19:40:06.282143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.292 [2024-05-15 19:40:06.282152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.292 [2024-05-15 19:40:06.282159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.292 [2024-05-15 19:40:06.282168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.292 [2024-05-15 19:40:06.282175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.292 [2024-05-15 19:40:06.282184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.292 [2024-05-15 19:40:06.282191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.292 [2024-05-15 19:40:06.282200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.292 [2024-05-15 19:40:06.282207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.292 [2024-05-15 19:40:06.282217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.292 [2024-05-15 19:40:06.282226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.292 [2024-05-15 19:40:06.282236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.292 [2024-05-15 19:40:06.282243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.292 [2024-05-15 19:40:06.282252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.292 [2024-05-15 19:40:06.282259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.292 [2024-05-15 19:40:06.282270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.292 [2024-05-15 19:40:06.282278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.292 [2024-05-15 19:40:06.282288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.292 [2024-05-15 19:40:06.282297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.292 [2024-05-15 19:40:06.282306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:40.292 [2024-05-15 19:40:06.282317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.292 [2024-05-15 19:40:06.282326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.292 [2024-05-15 19:40:06.282334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.292 [2024-05-15 19:40:06.282343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.292 [2024-05-15 19:40:06.282350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.293 [2024-05-15 19:40:06.282359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.293 [2024-05-15 19:40:06.282367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.293 [2024-05-15 19:40:06.282377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.293 [2024-05-15 19:40:06.282384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.293 [2024-05-15 19:40:06.282393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.293 [2024-05-15 19:40:06.282400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.293 [2024-05-15 19:40:06.282410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.293 [2024-05-15 19:40:06.282417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.293 [2024-05-15 19:40:06.282426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.293 [2024-05-15 19:40:06.282433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.293 [2024-05-15 19:40:06.282460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.293 [2024-05-15 19:40:06.282505] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13c8610 was disconnected and freed. reset controller. 
00:24:40.293 [2024-05-15 19:40:06.283220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.293 [2024-05-15 19:40:06.283241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.293 [2024-05-15 19:40:06.283253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.293 [2024-05-15 19:40:06.283260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.293 [2024-05-15 19:40:06.283270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.293 [2024-05-15 19:40:06.283278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.293 [2024-05-15 19:40:06.283292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.293 [2024-05-15 19:40:06.283299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.293 [2024-05-15 19:40:06.283308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.293 [2024-05-15 19:40:06.283322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.293 [2024-05-15 19:40:06.283332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.293 [2024-05-15 19:40:06.283340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.293 [2024-05-15 19:40:06.283349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.293 [2024-05-15 19:40:06.283357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.293 [2024-05-15 19:40:06.283366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.293 [2024-05-15 19:40:06.283373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.293 [2024-05-15 19:40:06.283382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.293 [2024-05-15 19:40:06.283389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.293 [2024-05-15 19:40:06.283399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.293 [2024-05-15 19:40:06.283406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.293 
[2024-05-15 19:40:06.283416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.293 [2024-05-15 19:40:06.283424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.293 [2024-05-15 19:40:06.283433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.293 [2024-05-15 19:40:06.283440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.293 [2024-05-15 19:40:06.283449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.293 [2024-05-15 19:40:06.283456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.293 [2024-05-15 19:40:06.283465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.293 [2024-05-15 19:40:06.283472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.293 [2024-05-15 19:40:06.283481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.293 [2024-05-15 19:40:06.283489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.293 [2024-05-15 19:40:06.283497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.293 [2024-05-15 19:40:06.283506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.293 [2024-05-15 19:40:06.283515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.293 [2024-05-15 19:40:06.283523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.293 [2024-05-15 19:40:06.283532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.293 [2024-05-15 19:40:06.283539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.293 [2024-05-15 19:40:06.283549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.293 [2024-05-15 19:40:06.283559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.293 [2024-05-15 19:40:06.283568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.293 [2024-05-15 19:40:06.283575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.293 [2024-05-15 
19:40:06.283584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.293 [2024-05-15 19:40:06.283591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.293 [2024-05-15 19:40:06.283600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.293 [2024-05-15 19:40:06.283607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.293 [2024-05-15 19:40:06.283616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.293 [2024-05-15 19:40:06.283623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.293 [2024-05-15 19:40:06.283632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.293 [2024-05-15 19:40:06.283639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.293 [2024-05-15 19:40:06.283648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.293 [2024-05-15 19:40:06.283655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.293 [2024-05-15 19:40:06.283664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.293 [2024-05-15 19:40:06.283672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.293 [2024-05-15 19:40:06.283681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.293 [2024-05-15 19:40:06.283688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.293 [2024-05-15 19:40:06.283697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.294 [2024-05-15 19:40:06.283704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.294 [2024-05-15 19:40:06.283715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.294 [2024-05-15 19:40:06.283722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.294 [2024-05-15 19:40:06.283731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.294 [2024-05-15 19:40:06.291310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.294 [2024-05-15 19:40:06.291374] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.294 [2024-05-15 19:40:06.291385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.294 [2024-05-15 19:40:06.291396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.294 [2024-05-15 19:40:06.291403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.294 [2024-05-15 19:40:06.291412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.294 [2024-05-15 19:40:06.291420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.294 [2024-05-15 19:40:06.291429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.294 [2024-05-15 19:40:06.291436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.294 [2024-05-15 19:40:06.291446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.294 [2024-05-15 19:40:06.291453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.294 [2024-05-15 19:40:06.291463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.294 [2024-05-15 19:40:06.291470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.294 [2024-05-15 19:40:06.291479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.294 [2024-05-15 19:40:06.291487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.294 [2024-05-15 19:40:06.291496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.294 [2024-05-15 19:40:06.291503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.294 [2024-05-15 19:40:06.291512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.294 [2024-05-15 19:40:06.291520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.294 [2024-05-15 19:40:06.291529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.294 [2024-05-15 19:40:06.291536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.294 [2024-05-15 19:40:06.291545] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.294 [2024-05-15 19:40:06.291557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.294 [2024-05-15 19:40:06.291566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.294 [2024-05-15 19:40:06.291575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.294 [2024-05-15 19:40:06.291584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.294 [2024-05-15 19:40:06.291592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.294 [2024-05-15 19:40:06.291602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.294 [2024-05-15 19:40:06.291609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.294 [2024-05-15 19:40:06.291619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.294 [2024-05-15 19:40:06.291626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.294 [2024-05-15 19:40:06.291636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.294 [2024-05-15 19:40:06.291643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.294 [2024-05-15 19:40:06.291652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.294 [2024-05-15 19:40:06.291660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.294 [2024-05-15 19:40:06.291669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.294 [2024-05-15 19:40:06.291676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.294 [2024-05-15 19:40:06.291684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.294 [2024-05-15 19:40:06.291692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.294 [2024-05-15 19:40:06.291702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.294 [2024-05-15 19:40:06.291709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.294 [2024-05-15 19:40:06.291718] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.294 [2024-05-15 19:40:06.291726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.294 [2024-05-15 19:40:06.291737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.294 [2024-05-15 19:40:06.291744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.294 [2024-05-15 19:40:06.291754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.294 [2024-05-15 19:40:06.291761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.294 [2024-05-15 19:40:06.291772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.294 [2024-05-15 19:40:06.291780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.294 [2024-05-15 19:40:06.291789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.294 [2024-05-15 19:40:06.291796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.294 [2024-05-15 19:40:06.291806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.294 [2024-05-15 19:40:06.291813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.294 [2024-05-15 19:40:06.291823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.294 [2024-05-15 19:40:06.291830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.294 [2024-05-15 19:40:06.291840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.294 [2024-05-15 19:40:06.291847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.294 [2024-05-15 19:40:06.291856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.294 [2024-05-15 19:40:06.291864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.294 [2024-05-15 19:40:06.291873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.294 [2024-05-15 19:40:06.291879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.294 [2024-05-15 19:40:06.291889] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.294 [2024-05-15 19:40:06.291896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.294 [2024-05-15 19:40:06.291906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.294 [2024-05-15 19:40:06.291914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.294 [2024-05-15 19:40:06.291923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.294 [2024-05-15 19:40:06.291931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.294 [2024-05-15 19:40:06.291940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.294 [2024-05-15 19:40:06.291947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.294 [2024-05-15 19:40:06.292004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.294 [2024-05-15 19:40:06.292054] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13cc240 was disconnected and freed. reset controller. 00:24:40.294 [2024-05-15 19:40:06.293212] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ac800 (9): Bad file descriptor 00:24:40.294 [2024-05-15 19:40:06.293248] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134cc00 (9): Bad file descriptor 00:24:40.294 [2024-05-15 19:40:06.293274] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134a2c0 (9): Bad file descriptor 00:24:40.294 [2024-05-15 19:40:06.293298] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda4610 (9): Bad file descriptor 00:24:40.294 [2024-05-15 19:40:06.293326] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12bf7c0 (9): Bad file descriptor 00:24:40.294 [2024-05-15 19:40:06.293350] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e1ce0 (9): Bad file descriptor 00:24:40.294 [2024-05-15 19:40:06.293370] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129dee0 (9): Bad file descriptor 00:24:40.294 [2024-05-15 19:40:06.293389] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1468690 (9): Bad file descriptor 00:24:40.295 [2024-05-15 19:40:06.293412] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e2370 (9): Bad file descriptor 00:24:40.295 [2024-05-15 19:40:06.293431] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129abc0 (9): Bad file descriptor 00:24:40.295 [2024-05-15 19:40:06.294691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.295 [2024-05-15 19:40:06.294710] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.295 [2024-05-15 19:40:06.294728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.295 [2024-05-15 19:40:06.294738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.295 [2024-05-15 19:40:06.294750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.295 [2024-05-15 19:40:06.294760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.295 [2024-05-15 19:40:06.294773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.295 [2024-05-15 19:40:06.294783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.295 [2024-05-15 19:40:06.294794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.295 [2024-05-15 19:40:06.294801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.295 [2024-05-15 19:40:06.294811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.295 [2024-05-15 19:40:06.294819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.295 [2024-05-15 19:40:06.294829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.295 [2024-05-15 19:40:06.294838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.295 [2024-05-15 19:40:06.294847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.295 [2024-05-15 19:40:06.294855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.295 [2024-05-15 19:40:06.294865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.295 [2024-05-15 19:40:06.294874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.295 [2024-05-15 19:40:06.294888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.295 [2024-05-15 19:40:06.294897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.295 [2024-05-15 19:40:06.294906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.295 [2024-05-15 19:40:06.294914] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.295 [2024-05-15 19:40:06.294923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.295 [2024-05-15 19:40:06.294931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.295 [2024-05-15 19:40:06.294940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.295 [2024-05-15 19:40:06.294948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.295 [2024-05-15 19:40:06.294958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.295 [2024-05-15 19:40:06.294965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.295 [2024-05-15 19:40:06.294976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.295 [2024-05-15 19:40:06.294983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.295 [2024-05-15 19:40:06.294992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.295 [2024-05-15 19:40:06.295000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.295 [2024-05-15 19:40:06.295010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.295 [2024-05-15 19:40:06.295017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.295 [2024-05-15 19:40:06.295027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.295 [2024-05-15 19:40:06.295034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.295 [2024-05-15 19:40:06.295043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.295 [2024-05-15 19:40:06.295051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.295 [2024-05-15 19:40:06.295062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.295 [2024-05-15 19:40:06.295071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.295 [2024-05-15 19:40:06.295080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.295 [2024-05-15 19:40:06.295088] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.295 [2024-05-15 19:40:06.295098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.295 [2024-05-15 19:40:06.295107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.295 [2024-05-15 19:40:06.295117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.295 [2024-05-15 19:40:06.295125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.295 [2024-05-15 19:40:06.295134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.295 [2024-05-15 19:40:06.295142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.295 [2024-05-15 19:40:06.295151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.295 [2024-05-15 19:40:06.295159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.295 [2024-05-15 19:40:06.295168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.295 [2024-05-15 19:40:06.295176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.295 [2024-05-15 19:40:06.295185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.295 [2024-05-15 19:40:06.295193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.295 [2024-05-15 19:40:06.295202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.295 [2024-05-15 19:40:06.295210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.295 [2024-05-15 19:40:06.295219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.295 [2024-05-15 19:40:06.295227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.295 [2024-05-15 19:40:06.295237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.295 [2024-05-15 19:40:06.295243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.295 [2024-05-15 19:40:06.295253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.295 [2024-05-15 19:40:06.295261] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.295 [2024-05-15 19:40:06.295270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.295 [2024-05-15 19:40:06.295278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.295 [2024-05-15 19:40:06.295287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.295 [2024-05-15 19:40:06.295295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.295 [2024-05-15 19:40:06.295305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.295 [2024-05-15 19:40:06.295312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.295 [2024-05-15 19:40:06.295329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.295 [2024-05-15 19:40:06.295336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.295 [2024-05-15 19:40:06.295345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.295 [2024-05-15 19:40:06.295353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.295 [2024-05-15 19:40:06.295362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.295 [2024-05-15 19:40:06.295370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.295 [2024-05-15 19:40:06.295380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.295 [2024-05-15 19:40:06.295387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.295 [2024-05-15 19:40:06.295397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.295 [2024-05-15 19:40:06.295404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.295 [2024-05-15 19:40:06.295414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.296 [2024-05-15 19:40:06.295422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.296 [2024-05-15 19:40:06.295432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.296 [2024-05-15 19:40:06.295439] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.296 [2024-05-15 19:40:06.295452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.296 [2024-05-15 19:40:06.295459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.296 [2024-05-15 19:40:06.295469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.296 [2024-05-15 19:40:06.295477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.296 [2024-05-15 19:40:06.295487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.296 [2024-05-15 19:40:06.295495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.296 [2024-05-15 19:40:06.295505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.296 [2024-05-15 19:40:06.295513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.296 [2024-05-15 19:40:06.295523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.296 [2024-05-15 19:40:06.295531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.296 [2024-05-15 19:40:06.295540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.296 [2024-05-15 19:40:06.295549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.296 [2024-05-15 19:40:06.295558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.296 [2024-05-15 19:40:06.295566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.296 [2024-05-15 19:40:06.295576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.296 [2024-05-15 19:40:06.295584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.296 [2024-05-15 19:40:06.295593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.296 [2024-05-15 19:40:06.295600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.296 [2024-05-15 19:40:06.295610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.296 [2024-05-15 19:40:06.295619] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.296 [2024-05-15 19:40:06.295628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.296 [2024-05-15 19:40:06.295635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.296 [2024-05-15 19:40:06.295645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.296 [2024-05-15 19:40:06.295652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.296 [2024-05-15 19:40:06.295662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.296 [2024-05-15 19:40:06.295669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.296 [2024-05-15 19:40:06.295678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.296 [2024-05-15 19:40:06.295686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.296 [2024-05-15 19:40:06.295695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.296 [2024-05-15 19:40:06.295702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.296 [2024-05-15 19:40:06.295712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.296 [2024-05-15 19:40:06.295719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.296 [2024-05-15 19:40:06.295728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.296 [2024-05-15 19:40:06.295736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.296 [2024-05-15 19:40:06.295746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.296 [2024-05-15 19:40:06.295752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.296 [2024-05-15 19:40:06.295763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.296 [2024-05-15 19:40:06.295771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.296 [2024-05-15 19:40:06.295780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.296 [2024-05-15 19:40:06.295788] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.296 [2024-05-15 19:40:06.295797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.296 [2024-05-15 19:40:06.295804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.296 [2024-05-15 19:40:06.295814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.296 [2024-05-15 19:40:06.295821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.296 [2024-05-15 19:40:06.295831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.296 [2024-05-15 19:40:06.295839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.296 [2024-05-15 19:40:06.295900] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13cad00 was disconnected and freed. reset controller. 00:24:40.296 [2024-05-15 19:40:06.297201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.296 [2024-05-15 19:40:06.297217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.296 [2024-05-15 19:40:06.297232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.296 [2024-05-15 19:40:06.297241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.296 [2024-05-15 19:40:06.297252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.296 [2024-05-15 19:40:06.297261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.296 [2024-05-15 19:40:06.297273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.296 [2024-05-15 19:40:06.297282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.296 [2024-05-15 19:40:06.297294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.296 [2024-05-15 19:40:06.297303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.296 [2024-05-15 19:40:06.297319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.296 [2024-05-15 19:40:06.297329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.296 [2024-05-15 19:40:06.297340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.296 [2024-05-15 19:40:06.297350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.296 [2024-05-15 19:40:06.297365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.296 [2024-05-15 19:40:06.297373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.296 [2024-05-15 19:40:06.297383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.296 [2024-05-15 19:40:06.297390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.296 [2024-05-15 19:40:06.297400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.296 [2024-05-15 19:40:06.297408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.296 [2024-05-15 19:40:06.297417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.296 [2024-05-15 19:40:06.297425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.296 [2024-05-15 19:40:06.297434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.296 [2024-05-15 19:40:06.297441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.296 [2024-05-15 19:40:06.297451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.296 [2024-05-15 19:40:06.297459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.296 [2024-05-15 19:40:06.297468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.297 [2024-05-15 19:40:06.297476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.297 [2024-05-15 19:40:06.297485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.297 [2024-05-15 19:40:06.297492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.297 [2024-05-15 19:40:06.297502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.297 [2024-05-15 19:40:06.297510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.297 [2024-05-15 19:40:06.297519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.297 [2024-05-15 19:40:06.297527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.297 [2024-05-15 19:40:06.297537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.297 [2024-05-15 19:40:06.297545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.297 [2024-05-15 19:40:06.297555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.297 [2024-05-15 19:40:06.297563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.297 [2024-05-15 19:40:06.297572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.297 [2024-05-15 19:40:06.297582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.297 [2024-05-15 19:40:06.297592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.297 [2024-05-15 19:40:06.297600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.297 [2024-05-15 19:40:06.297609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.297 [2024-05-15 19:40:06.297616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.297 [2024-05-15 19:40:06.297626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.297 [2024-05-15 19:40:06.297634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.297 [2024-05-15 19:40:06.297644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.297 [2024-05-15 19:40:06.297652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.297 [2024-05-15 19:40:06.297662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.297 [2024-05-15 19:40:06.297669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.297 [2024-05-15 19:40:06.297679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.297 [2024-05-15 19:40:06.297686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.297 [2024-05-15 19:40:06.297696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 
lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.297 [2024-05-15 19:40:06.297704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.297 [2024-05-15 19:40:06.297712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.297 [2024-05-15 19:40:06.297720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.297 [2024-05-15 19:40:06.297731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.297 [2024-05-15 19:40:06.297739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.297 [2024-05-15 19:40:06.297749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.297 [2024-05-15 19:40:06.297756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.297 [2024-05-15 19:40:06.297765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.297 [2024-05-15 19:40:06.297773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.297 [2024-05-15 19:40:06.297783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.297 [2024-05-15 19:40:06.297791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.297 [2024-05-15 19:40:06.297802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.297 [2024-05-15 19:40:06.297809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.297 [2024-05-15 19:40:06.297819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.297 [2024-05-15 19:40:06.297827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.297 [2024-05-15 19:40:06.297836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.297 [2024-05-15 19:40:06.297844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.297 [2024-05-15 19:40:06.297854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.297 [2024-05-15 19:40:06.297861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.297 [2024-05-15 19:40:06.297870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.297 [2024-05-15 19:40:06.297878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.297 [2024-05-15 19:40:06.297887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.297 [2024-05-15 19:40:06.297895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.297 [2024-05-15 19:40:06.297904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.297 [2024-05-15 19:40:06.297911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.297 [2024-05-15 19:40:06.297921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.297 [2024-05-15 19:40:06.297928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.297 [2024-05-15 19:40:06.297938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.297 [2024-05-15 19:40:06.297945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.297 [2024-05-15 19:40:06.297954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.297 [2024-05-15 19:40:06.297962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.297 [2024-05-15 19:40:06.297972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.297 [2024-05-15 19:40:06.297980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.297 [2024-05-15 19:40:06.297990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.297 [2024-05-15 19:40:06.297998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.297 [2024-05-15 19:40:06.298007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.297 [2024-05-15 19:40:06.298019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.297 [2024-05-15 19:40:06.298029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.297 [2024-05-15 19:40:06.298036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.297 [2024-05-15 19:40:06.298046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.297 [2024-05-15 19:40:06.298053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.297 [2024-05-15 19:40:06.298063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.297 [2024-05-15 19:40:06.298070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.297 [2024-05-15 19:40:06.298079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.297 [2024-05-15 19:40:06.298087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.297 [2024-05-15 19:40:06.298096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.297 [2024-05-15 19:40:06.298103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.297 [2024-05-15 19:40:06.298114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.297 [2024-05-15 19:40:06.298121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.297 [2024-05-15 19:40:06.298130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.297 [2024-05-15 19:40:06.298137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.297 [2024-05-15 19:40:06.298149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.298 [2024-05-15 19:40:06.298157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.298 [2024-05-15 19:40:06.298167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.298 [2024-05-15 19:40:06.298175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.298 [2024-05-15 19:40:06.298185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.298 [2024-05-15 19:40:06.298192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.298 [2024-05-15 19:40:06.298201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.298 [2024-05-15 19:40:06.298209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.298 [2024-05-15 19:40:06.298218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:40.298 [2024-05-15 19:40:06.298225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.298 [2024-05-15 19:40:06.298236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.298 [2024-05-15 19:40:06.298244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.298 [2024-05-15 19:40:06.298253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.298 [2024-05-15 19:40:06.298261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.298 [2024-05-15 19:40:06.298270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.298 [2024-05-15 19:40:06.298278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.298 [2024-05-15 19:40:06.298288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.298 [2024-05-15 19:40:06.298295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.298 [2024-05-15 19:40:06.298305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.298 [2024-05-15 19:40:06.298316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.298 [2024-05-15 19:40:06.298325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.298 [2024-05-15 19:40:06.298333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.298 [2024-05-15 19:40:06.298343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.298 [2024-05-15 19:40:06.298350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.298 [2024-05-15 19:40:06.298401] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1297c30 was disconnected and freed. reset controller. 
00:24:40.298 [2024-05-15 19:40:06.298537] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.298 [2024-05-15 19:40:06.301218] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:24:40.298 [2024-05-15 19:40:06.301644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.298 [2024-05-15 19:40:06.302063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.298 [2024-05-15 19:40:06.302077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129dee0 with addr=10.0.0.2, port=4420 00:24:40.298 [2024-05-15 19:40:06.302088] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129dee0 is same with the state(5) to be set 00:24:40.298 [2024-05-15 19:40:06.302831] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:40.298 [2024-05-15 19:40:06.302882] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:40.298 [2024-05-15 19:40:06.302920] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:40.298 [2024-05-15 19:40:06.302934] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:24:40.298 [2024-05-15 19:40:06.302948] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:24:40.298 [2024-05-15 19:40:06.303567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.298 [2024-05-15 19:40:06.303904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.298 [2024-05-15 19:40:06.303918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129abc0 with addr=10.0.0.2, port=4420 00:24:40.298 [2024-05-15 19:40:06.303933] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129abc0 is same with the state(5) to be set 00:24:40.298 [2024-05-15 19:40:06.303948] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129dee0 (9): Bad file descriptor 00:24:40.298 [2024-05-15 19:40:06.304030] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:40.298 [2024-05-15 19:40:06.304342] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:40.298 [2024-05-15 19:40:06.304384] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:40.298 [2024-05-15 19:40:06.304782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.298 [2024-05-15 19:40:06.305180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.298 [2024-05-15 19:40:06.305193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12ac800 with addr=10.0.0.2, port=4420 00:24:40.298 [2024-05-15 19:40:06.305201] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac800 is same with the state(5) to be set 00:24:40.298 [2024-05-15 19:40:06.305613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.298 [2024-05-15 19:40:06.306028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.298 [2024-05-15 19:40:06.306039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda4610 with addr=10.0.0.2, port=4420 00:24:40.298 [2024-05-15 19:40:06.306047] 
nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4610 is same with the state(5) to be set 00:24:40.298 [2024-05-15 19:40:06.306057] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129abc0 (9): Bad file descriptor 00:24:40.298 [2024-05-15 19:40:06.306068] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.298 [2024-05-15 19:40:06.306076] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.298 [2024-05-15 19:40:06.306084] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.298 [2024-05-15 19:40:06.306214] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.298 [2024-05-15 19:40:06.306251] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ac800 (9): Bad file descriptor 00:24:40.298 [2024-05-15 19:40:06.306260] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda4610 (9): Bad file descriptor 00:24:40.298 [2024-05-15 19:40:06.306269] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:24:40.298 [2024-05-15 19:40:06.306275] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:24:40.298 [2024-05-15 19:40:06.306282] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:24:40.298 [2024-05-15 19:40:06.306328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.298 [2024-05-15 19:40:06.306340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.298 [2024-05-15 19:40:06.306355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.298 [2024-05-15 19:40:06.306363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.298 [2024-05-15 19:40:06.306372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.298 [2024-05-15 19:40:06.306379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.298 [2024-05-15 19:40:06.306393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.298 [2024-05-15 19:40:06.306401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.298 [2024-05-15 19:40:06.306410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.298 [2024-05-15 19:40:06.306418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.298 [2024-05-15 19:40:06.306427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:40.298 [2024-05-15 19:40:06.306434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.299 [2024-05-15 19:40:06.306445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.299 [2024-05-15 19:40:06.306452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.299 [2024-05-15 19:40:06.306462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.299 [2024-05-15 19:40:06.306470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.299 [2024-05-15 19:40:06.306480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.299 [2024-05-15 19:40:06.306487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.299 [2024-05-15 19:40:06.306497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.299 [2024-05-15 19:40:06.306503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.299 [2024-05-15 19:40:06.306514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.299 [2024-05-15 19:40:06.306521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.299 [2024-05-15 19:40:06.306532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.299 [2024-05-15 19:40:06.306539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.299 [2024-05-15 19:40:06.306549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.299 [2024-05-15 19:40:06.306557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.299 [2024-05-15 19:40:06.306566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.299 [2024-05-15 19:40:06.306573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.299 [2024-05-15 19:40:06.306583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.299 [2024-05-15 19:40:06.306591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.299 [2024-05-15 19:40:06.306601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:40.299 [2024-05-15 19:40:06.306611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-05-15 19:40:06.306621 - 19:40:06.307465] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:16-63 nsid:1 lba:18432-24448 (in steps of 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-05-15 19:40:06.307474] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c9890 is same with the state(5) to be set
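The "(00/08)" pair that spdk_nvme_print_completion prints after the status text above is the NVMe status code type and status code in hex: type 00h is the generic command status set and code 08h is Command Aborted due to SQ Deletion, which matches the ABORTED - SQ DELETION wording in the same records. Below is a minimal sketch of decoding that notation from one completion line; it is illustrative only, and the regular expression and the small lookup table are assumptions made for the example, not part of SPDK or of this test run.

import re

# (status code type, status code) pair seen in the completions above:
# generic status set (0x00), Command Aborted due to SQ Deletion (0x08).
STATUS = {(0x00, 0x08): "ABORTED - SQ DELETION"}

def decode_completion(line):
    """Extract status text, (sct/sc) and qid from an spdk_nvme_print_completion line."""
    m = re.search(r"\*NOTICE\*: (.+?) \(([0-9a-fA-F]{2})/([0-9a-fA-F]{2})\) qid:(\d+)", line)
    if not m:
        return None
    sct, sc = int(m.group(2), 16), int(m.group(3), 16)
    return {"text": m.group(1), "sct": sct, "sc": sc,
            "qid": int(m.group(4)), "meaning": STATUS.get((sct, sc), "unknown")}

print(decode_completion(
    "nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION "
    "(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0"))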
[2024-05-15 19:40:06.308746 - 19:40:06.309905] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 (in steps of 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-05-15 19:40:06.309916] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cd6f0 is same with the state(5) to be set
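Each burst above has the same shape: 243:nvme_io_qpair_print_command prints one READ per command id, 474:spdk_nvme_print_completion prints the matching aborted completion, and the burst ends with nvme_tcp_qpair_set_recv_state naming the affected qpair. When scanning a run like this it can help to collapse each burst to one line per qpair. The sketch below is an illustrative aggregator over a saved copy of this console output; the file name console.log and the close-a-burst-on-the-recv-state-line heuristic are assumptions for the example.

import re
from collections import defaultdict

reads = defaultdict(list)   # burst index -> list of (cid, lba) from the READ command prints
burst = 0                   # a recv-state *ERROR* line closes the current burst

with open("console.log") as log:            # assumed: this console output saved locally
    for line in log:
        for cid, lba in re.findall(r"READ sqid:1 cid:(\d+) nsid:1 lba:(\d+) len:128", line):
            reads[burst].append((int(cid), int(lba)))
        for qpair in re.findall(r"recv state of tqpair=(0x[0-9a-f]+)", line):
            cids = [c for c, _ in reads[burst]]
            lbas = [l for _, l in reads[burst]]
            if cids:
                print(f"tqpair={qpair}: {len(cids)} aborted READs, "
                      f"cid {min(cids)}-{max(cids)}, lba {min(lbas)}-{max(lbas)}")
            burst += 1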
[2024-05-15 19:40:06.311175 - 19:40:06.312368] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 (in steps of 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-05-15 19:40:06.312377] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1296730 is same with the state(5) to be set
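One detail worth noticing in these bursts: the starting LBA advances by exactly 128 blocks per command id (cid 0 at lba 16384, cid 16 at 18432, cid 63 at 24448), i.e. lba = base + 128 * cid with len:128, and the burst that follows picks up at lba 24576 = 16384 + 128 * 64, so the aborted commands appear to be one sequential window of 64 outstanding 128-block READs per queue pair. A quick check of that arithmetic, using only values printed above:

# lba = base + 128 * cid, as printed in the READ lines above
base = 16384
assert [base + 128 * cid for cid in (0, 1, 16, 63)] == [16384, 16512, 18432, 24448]
# the next burst starts where the 64-command window ends
assert base + 128 * 64 == 24576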
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.303 [2024-05-15 19:40:06.313723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.303 [2024-05-15 19:40:06.313732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.303 [2024-05-15 19:40:06.313743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.303 [2024-05-15 19:40:06.313750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.303 [2024-05-15 19:40:06.313759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.303 [2024-05-15 19:40:06.313767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.303 [2024-05-15 19:40:06.313776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.303 [2024-05-15 19:40:06.313784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.303 [2024-05-15 19:40:06.313793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.303 [2024-05-15 19:40:06.313800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.303 [2024-05-15 19:40:06.313809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.303 [2024-05-15 19:40:06.313816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.303 [2024-05-15 19:40:06.313826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.303 [2024-05-15 19:40:06.313833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.304 [2024-05-15 19:40:06.313845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.304 [2024-05-15 19:40:06.313853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.304 [2024-05-15 19:40:06.313862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.304 [2024-05-15 19:40:06.313869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.304 [2024-05-15 19:40:06.313879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.304 [2024-05-15 19:40:06.313887] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.304 [2024-05-15 19:40:06.313897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.304 [2024-05-15 19:40:06.313905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.304 [2024-05-15 19:40:06.313914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.304 [2024-05-15 19:40:06.313923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.304 [2024-05-15 19:40:06.313934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.304 [2024-05-15 19:40:06.313943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.304 [2024-05-15 19:40:06.313954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.304 [2024-05-15 19:40:06.313963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.304 [2024-05-15 19:40:06.313974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.304 [2024-05-15 19:40:06.313982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.304 [2024-05-15 19:40:06.313992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.304 [2024-05-15 19:40:06.314002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.304 [2024-05-15 19:40:06.314013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.304 [2024-05-15 19:40:06.314022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.304 [2024-05-15 19:40:06.314032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.304 [2024-05-15 19:40:06.314041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.304 [2024-05-15 19:40:06.314051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.304 [2024-05-15 19:40:06.314058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.304 [2024-05-15 19:40:06.314069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.304 [2024-05-15 19:40:06.314079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.304 [2024-05-15 19:40:06.314088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.304 [2024-05-15 19:40:06.314096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.304 [2024-05-15 19:40:06.314106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.304 [2024-05-15 19:40:06.314113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.304 [2024-05-15 19:40:06.314123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.304 [2024-05-15 19:40:06.314131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.304 [2024-05-15 19:40:06.314140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.304 [2024-05-15 19:40:06.314148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.304 [2024-05-15 19:40:06.314159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.304 [2024-05-15 19:40:06.314168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.304 [2024-05-15 19:40:06.314178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.304 [2024-05-15 19:40:06.314186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.304 [2024-05-15 19:40:06.314196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.304 [2024-05-15 19:40:06.314204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.304 [2024-05-15 19:40:06.314213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.304 [2024-05-15 19:40:06.314220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.304 [2024-05-15 19:40:06.314230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.304 [2024-05-15 19:40:06.314238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.304 [2024-05-15 19:40:06.314248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.304 [2024-05-15 19:40:06.314255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.304 [2024-05-15 19:40:06.314266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.304 [2024-05-15 19:40:06.314274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.304 [2024-05-15 19:40:06.314285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.304 [2024-05-15 19:40:06.314293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.304 [2024-05-15 19:40:06.314304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.304 [2024-05-15 19:40:06.314316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.304 [2024-05-15 19:40:06.314326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.304 [2024-05-15 19:40:06.314334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.304 [2024-05-15 19:40:06.314344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.304 [2024-05-15 19:40:06.314351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.304 [2024-05-15 19:40:06.314361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.304 [2024-05-15 19:40:06.314369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.304 [2024-05-15 19:40:06.314378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.304 [2024-05-15 19:40:06.314386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.304 [2024-05-15 19:40:06.314395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.304 [2024-05-15 19:40:06.314403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.304 [2024-05-15 19:40:06.314413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.304 [2024-05-15 19:40:06.314422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.304 [2024-05-15 19:40:06.314433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.304 [2024-05-15 19:40:06.314441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:40.304 [2024-05-15 19:40:06.314453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.304 [2024-05-15 19:40:06.314461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.304 [2024-05-15 19:40:06.314471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.304 [2024-05-15 19:40:06.314480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.304 [2024-05-15 19:40:06.314490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.304 [2024-05-15 19:40:06.314498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.304 [2024-05-15 19:40:06.314507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.304 [2024-05-15 19:40:06.314516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.304 [2024-05-15 19:40:06.314527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.304 [2024-05-15 19:40:06.314536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.304 [2024-05-15 19:40:06.314549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.304 [2024-05-15 19:40:06.314557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.304 [2024-05-15 19:40:06.314567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.304 [2024-05-15 19:40:06.314575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.304 [2024-05-15 19:40:06.314585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.304 [2024-05-15 19:40:06.314592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.305 [2024-05-15 19:40:06.314603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.305 [2024-05-15 19:40:06.314610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.305 [2024-05-15 19:40:06.314620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.305 [2024-05-15 19:40:06.314628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:40.305 [2024-05-15 19:40:06.314638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.305 [2024-05-15 19:40:06.314646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.305 [2024-05-15 19:40:06.314657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.305 [2024-05-15 19:40:06.314664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.305 [2024-05-15 19:40:06.314674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.305 [2024-05-15 19:40:06.314681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.305 [2024-05-15 19:40:06.314693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.305 [2024-05-15 19:40:06.314700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.305 [2024-05-15 19:40:06.314710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.305 [2024-05-15 19:40:06.314718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.305 [2024-05-15 19:40:06.314728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.305 [2024-05-15 19:40:06.314736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.305 [2024-05-15 19:40:06.314745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.305 [2024-05-15 19:40:06.314753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.305 [2024-05-15 19:40:06.314763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.305 [2024-05-15 19:40:06.314772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.305 [2024-05-15 19:40:06.314781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.305 [2024-05-15 19:40:06.314789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.305 [2024-05-15 19:40:06.314798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.305 [2024-05-15 19:40:06.314806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.305 [2024-05-15 
19:40:06.314816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.305 [2024-05-15 19:40:06.314823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.305 [2024-05-15 19:40:06.314832] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1299130 is same with the state(5) to be set 00:24:40.305 [2024-05-15 19:40:06.316091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.305 [2024-05-15 19:40:06.316106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.305 [2024-05-15 19:40:06.316118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.305 [2024-05-15 19:40:06.316127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.305 [2024-05-15 19:40:06.316138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.305 [2024-05-15 19:40:06.316147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.305 [2024-05-15 19:40:06.316158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.305 [2024-05-15 19:40:06.316166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.305 [2024-05-15 19:40:06.316177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.305 [2024-05-15 19:40:06.316185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.305 [2024-05-15 19:40:06.316194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.305 [2024-05-15 19:40:06.316201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.305 [2024-05-15 19:40:06.316210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.305 [2024-05-15 19:40:06.316217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.305 [2024-05-15 19:40:06.316227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.305 [2024-05-15 19:40:06.316234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.305 [2024-05-15 19:40:06.316244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.305 [2024-05-15 19:40:06.316254] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.305 [2024-05-15 19:40:06.316263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.305 [2024-05-15 19:40:06.316270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.305 [2024-05-15 19:40:06.316279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.305 [2024-05-15 19:40:06.316287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.305 [2024-05-15 19:40:06.316297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.305 [2024-05-15 19:40:06.316304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.305 [2024-05-15 19:40:06.316321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.305 [2024-05-15 19:40:06.316329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.305 [2024-05-15 19:40:06.316338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.305 [2024-05-15 19:40:06.316346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.305 [2024-05-15 19:40:06.316356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.305 [2024-05-15 19:40:06.316364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.305 [2024-05-15 19:40:06.316373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.305 [2024-05-15 19:40:06.316381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.305 [2024-05-15 19:40:06.316391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.305 [2024-05-15 19:40:06.316399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.305 [2024-05-15 19:40:06.316409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.305 [2024-05-15 19:40:06.316417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.305 [2024-05-15 19:40:06.316427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.305 [2024-05-15 19:40:06.316434] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.305 [2024-05-15 19:40:06.316444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.305 [2024-05-15 19:40:06.316451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.305 [2024-05-15 19:40:06.316461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.305 [2024-05-15 19:40:06.316469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.305 [2024-05-15 19:40:06.316481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.305 [2024-05-15 19:40:06.316489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.305 [2024-05-15 19:40:06.316498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.305 [2024-05-15 19:40:06.316506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.305 [2024-05-15 19:40:06.316516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.305 [2024-05-15 19:40:06.316524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.305 [2024-05-15 19:40:06.316533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.305 [2024-05-15 19:40:06.316541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.305 [2024-05-15 19:40:06.316551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.305 [2024-05-15 19:40:06.316559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.305 [2024-05-15 19:40:06.316570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.305 [2024-05-15 19:40:06.316578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.305 [2024-05-15 19:40:06.316588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.306 [2024-05-15 19:40:06.316595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.306 [2024-05-15 19:40:06.316605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.306 [2024-05-15 19:40:06.316612] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.306 [2024-05-15 19:40:06.316622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.306 [2024-05-15 19:40:06.316630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.306 [2024-05-15 19:40:06.316640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.306 [2024-05-15 19:40:06.316649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.306 [2024-05-15 19:40:06.316658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.306 [2024-05-15 19:40:06.316665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.306 [2024-05-15 19:40:06.316675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.306 [2024-05-15 19:40:06.316683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.306 [2024-05-15 19:40:06.316693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.306 [2024-05-15 19:40:06.316702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.306 [2024-05-15 19:40:06.316712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.306 [2024-05-15 19:40:06.316720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.306 [2024-05-15 19:40:06.316729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.306 [2024-05-15 19:40:06.316737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.306 [2024-05-15 19:40:06.316746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.306 [2024-05-15 19:40:06.316753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.306 [2024-05-15 19:40:06.316764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.306 [2024-05-15 19:40:06.316771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.306 [2024-05-15 19:40:06.316781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.306 [2024-05-15 19:40:06.316788] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.306 [2024-05-15 19:40:06.316798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.306 [2024-05-15 19:40:06.316806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.306 [2024-05-15 19:40:06.316816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.306 [2024-05-15 19:40:06.316824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.306 [2024-05-15 19:40:06.316834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.306 [2024-05-15 19:40:06.316841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.306 [2024-05-15 19:40:06.316851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.306 [2024-05-15 19:40:06.316859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.306 [2024-05-15 19:40:06.316869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.306 [2024-05-15 19:40:06.316877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.306 [2024-05-15 19:40:06.316886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.306 [2024-05-15 19:40:06.316894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.306 [2024-05-15 19:40:06.316903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.306 [2024-05-15 19:40:06.316911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.306 [2024-05-15 19:40:06.316922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.306 [2024-05-15 19:40:06.316930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.306 [2024-05-15 19:40:06.316940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.306 [2024-05-15 19:40:06.316948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.306 [2024-05-15 19:40:06.316958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.306 [2024-05-15 19:40:06.316965] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.306 [2024-05-15 19:40:06.316975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.306 [2024-05-15 19:40:06.316983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.306 [2024-05-15 19:40:06.316992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.306 [2024-05-15 19:40:06.317000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.306 [2024-05-15 19:40:06.317010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.306 [2024-05-15 19:40:06.317018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.306 [2024-05-15 19:40:06.317028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.306 [2024-05-15 19:40:06.317035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.306 [2024-05-15 19:40:06.317045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.306 [2024-05-15 19:40:06.317053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.306 [2024-05-15 19:40:06.317064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.306 [2024-05-15 19:40:06.317071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.306 [2024-05-15 19:40:06.317081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.306 [2024-05-15 19:40:06.317089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.306 [2024-05-15 19:40:06.317099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.306 [2024-05-15 19:40:06.317106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.306 [2024-05-15 19:40:06.317116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.306 [2024-05-15 19:40:06.317124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.306 [2024-05-15 19:40:06.317134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.306 [2024-05-15 19:40:06.317143] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.306 [2024-05-15 19:40:06.317153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.306 [2024-05-15 19:40:06.317160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.306 [2024-05-15 19:40:06.317170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.306 [2024-05-15 19:40:06.317177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.306 [2024-05-15 19:40:06.317187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.306 [2024-05-15 19:40:06.317194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.306 [2024-05-15 19:40:06.317204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.306 [2024-05-15 19:40:06.317211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.306 [2024-05-15 19:40:06.317221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.306 [2024-05-15 19:40:06.317228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.307 [2024-05-15 19:40:06.317236] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129a4b0 is same with the state(5) to be set 00:24:40.307 [2024-05-15 19:40:06.318742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.307 [2024-05-15 19:40:06.318762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.307 [2024-05-15 19:40:06.318774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.307 [2024-05-15 19:40:06.318781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.307 [2024-05-15 19:40:06.318791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.307 [2024-05-15 19:40:06.318798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.307 [2024-05-15 19:40:06.318808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.307 [2024-05-15 19:40:06.318815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.307 [2024-05-15 19:40:06.318824] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.307 [2024-05-15 19:40:06.318832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.307 [2024-05-15 19:40:06.318842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.307 [2024-05-15 19:40:06.318849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.307 [2024-05-15 19:40:06.318859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.307 [2024-05-15 19:40:06.318870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.307 [2024-05-15 19:40:06.318879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.307 [2024-05-15 19:40:06.318887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.307 [2024-05-15 19:40:06.318897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.307 [2024-05-15 19:40:06.318904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.307 [2024-05-15 19:40:06.318914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.307 [2024-05-15 19:40:06.318922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.307 [2024-05-15 19:40:06.318931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.307 [2024-05-15 19:40:06.318939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.307 [2024-05-15 19:40:06.318948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.307 [2024-05-15 19:40:06.318955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.307 [2024-05-15 19:40:06.318966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.307 [2024-05-15 19:40:06.318973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.307 [2024-05-15 19:40:06.318983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.307 [2024-05-15 19:40:06.318991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.307 [2024-05-15 19:40:06.319001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.307 [2024-05-15 19:40:06.319008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.307 [2024-05-15 19:40:06.319018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.307 [2024-05-15 19:40:06.319025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.307 [2024-05-15 19:40:06.319035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.307 [2024-05-15 19:40:06.319043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.307 [2024-05-15 19:40:06.319053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.307 [2024-05-15 19:40:06.319061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.307 [2024-05-15 19:40:06.319070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.307 [2024-05-15 19:40:06.319078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.307 [2024-05-15 19:40:06.319090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.307 [2024-05-15 19:40:06.319098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.307 [2024-05-15 19:40:06.319107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.307 [2024-05-15 19:40:06.319115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.307 [2024-05-15 19:40:06.319125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.307 [2024-05-15 19:40:06.319133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.307 [2024-05-15 19:40:06.319142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.307 [2024-05-15 19:40:06.319151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.307 [2024-05-15 19:40:06.319161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.307 [2024-05-15 19:40:06.319168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.307 [2024-05-15 19:40:06.319178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.307 [2024-05-15 19:40:06.319186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.307 [2024-05-15 19:40:06.319196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.307 [2024-05-15 19:40:06.319204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.307 [2024-05-15 19:40:06.319214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.307 [2024-05-15 19:40:06.319222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.307 [2024-05-15 19:40:06.319231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.307 [2024-05-15 19:40:06.319239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.307 [2024-05-15 19:40:06.319248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.307 [2024-05-15 19:40:06.319256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.307 [2024-05-15 19:40:06.319265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.307 [2024-05-15 19:40:06.319273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.307 [2024-05-15 19:40:06.319282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.307 [2024-05-15 19:40:06.319290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.307 [2024-05-15 19:40:06.319300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.307 [2024-05-15 19:40:06.319310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.307 [2024-05-15 19:40:06.319325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.307 [2024-05-15 19:40:06.319334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.307 [2024-05-15 19:40:06.319344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.307 [2024-05-15 19:40:06.319351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.307 [2024-05-15 19:40:06.319361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:40.307 [2024-05-15 19:40:06.319369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.307 [2024-05-15 19:40:06.319378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.307 [2024-05-15 19:40:06.319386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.307 [2024-05-15 19:40:06.319396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.307 [2024-05-15 19:40:06.319403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.307 [2024-05-15 19:40:06.319413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.307 [2024-05-15 19:40:06.319421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.308 [2024-05-15 19:40:06.319431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.308 [2024-05-15 19:40:06.319439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.308 [2024-05-15 19:40:06.319449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.308 [2024-05-15 19:40:06.319456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.308 [2024-05-15 19:40:06.319466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.308 [2024-05-15 19:40:06.319474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.308 [2024-05-15 19:40:06.319485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.308 [2024-05-15 19:40:06.319492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.308 [2024-05-15 19:40:06.319502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.308 [2024-05-15 19:40:06.319509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.308 [2024-05-15 19:40:06.319519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.308 [2024-05-15 19:40:06.319527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.308 [2024-05-15 19:40:06.319539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:40.308 [2024-05-15 19:40:06.319546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.308 [2024-05-15 19:40:06.319556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.308 [2024-05-15 19:40:06.319564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.308 [2024-05-15 19:40:06.319574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.308 [2024-05-15 19:40:06.319581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.308 [2024-05-15 19:40:06.319591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.308 [2024-05-15 19:40:06.319598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.308 [2024-05-15 19:40:06.319608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.308 [2024-05-15 19:40:06.319615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.308 [2024-05-15 19:40:06.319626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.308 [2024-05-15 19:40:06.319633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.308 [2024-05-15 19:40:06.319644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.308 [2024-05-15 19:40:06.319651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.308 [2024-05-15 19:40:06.319661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.308 [2024-05-15 19:40:06.319668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.308 [2024-05-15 19:40:06.319678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.308 [2024-05-15 19:40:06.319686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.308 [2024-05-15 19:40:06.319696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.308 [2024-05-15 19:40:06.319703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.308 [2024-05-15 19:40:06.319713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.308 [2024-05-15 
19:40:06.319721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.308 [2024-05-15 19:40:06.319731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.308 [2024-05-15 19:40:06.319738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.308 [2024-05-15 19:40:06.319749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.308 [2024-05-15 19:40:06.319758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.308 [2024-05-15 19:40:06.319768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.308 [2024-05-15 19:40:06.319776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.308 [2024-05-15 19:40:06.319786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.308 [2024-05-15 19:40:06.319794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.308 [2024-05-15 19:40:06.319804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.308 [2024-05-15 19:40:06.319811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.308 [2024-05-15 19:40:06.319821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.308 [2024-05-15 19:40:06.319828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.308 [2024-05-15 19:40:06.319837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.308 [2024-05-15 19:40:06.319845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.308 [2024-05-15 19:40:06.319854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.308 [2024-05-15 19:40:06.319862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.308 [2024-05-15 19:40:06.319871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.308 [2024-05-15 19:40:06.319878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.308 [2024-05-15 19:40:06.319887] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144b1e0 is same with the state(5) to be set 00:24:40.308 [2024-05-15 19:40:06.321819] 
bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.308 [2024-05-15 19:40:06.321841] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:24:40.308 [2024-05-15 19:40:06.321852] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:24:40.308 [2024-05-15 19:40:06.321861] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:24:40.308 [2024-05-15 19:40:06.321893] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:24:40.308 [2024-05-15 19:40:06.321900] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:24:40.308 [2024-05-15 19:40:06.321908] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:24:40.308 [2024-05-15 19:40:06.321922] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:24:40.308 [2024-05-15 19:40:06.321929] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:24:40.308 [2024-05-15 19:40:06.321935] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:24:40.308 [2024-05-15 19:40:06.321974] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:40.308 [2024-05-15 19:40:06.321987] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:40.308 [2024-05-15 19:40:06.322001] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:40.308 [2024-05-15 19:40:06.322015] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:40.308 [2024-05-15 19:40:06.322027] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
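Note on the flood of READ completions above: the status (00/08) decodes to status code type 0x0 (generic) with status code 0x08, i.e. each command was aborted because its submission queue was deleted while the target was being shut down, after which bdev_nvme reports the follow-up controller resets as failed. A quick way to summarize such a flood is sketched below; the log file name is illustrative, not something this test produces.

  # Count the aborted completions per queue in a saved copy of this console
  # output (the file name nvmf_shutdown_tc3.log is made up for illustration).
  grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' nvmf_shutdown_tc3.log |
    awk '{print $NF}' | sort | uniq -c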
00:24:40.308 [2024-05-15 19:40:06.322093] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:24:40.308 [2024-05-15 19:40:06.322104] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:24:40.308 task offset: 26496 on job bdev=Nvme1n1 fails 00:24:40.308 00:24:40.308 Latency(us) 00:24:40.308 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:40.308 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:40.308 Job: Nvme1n1 ended in about 0.94 seconds with error 00:24:40.308 Verification LBA range: start 0x0 length 0x400 00:24:40.308 Nvme1n1 : 0.94 203.23 12.70 67.74 0.00 233461.23 12451.84 270882.13 00:24:40.308 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:40.308 Job: Nvme2n1 ended in about 0.96 seconds with error 00:24:40.308 Verification LBA range: start 0x0 length 0x400 00:24:40.308 Nvme2n1 : 0.96 133.50 8.34 66.75 0.00 309582.79 18786.99 244667.73 00:24:40.308 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:40.308 Job: Nvme3n1 ended in about 0.95 seconds with error 00:24:40.308 Verification LBA range: start 0x0 length 0x400 00:24:40.308 Nvme3n1 : 0.95 209.48 13.09 67.37 0.00 219036.43 22173.01 227191.47 00:24:40.308 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:40.308 Job: Nvme4n1 ended in about 0.95 seconds with error 00:24:40.308 Verification LBA range: start 0x0 length 0x400 00:24:40.308 Nvme4n1 : 0.95 202.70 12.67 67.57 0.00 219509.55 12997.97 267386.88 00:24:40.308 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:40.308 Job: Nvme5n1 ended in about 0.96 seconds with error 00:24:40.308 Verification LBA range: start 0x0 length 0x400 00:24:40.308 Nvme5n1 : 0.96 133.17 8.32 66.58 0.00 290891.09 22609.92 276125.01 00:24:40.308 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:40.308 Job: Nvme6n1 ended in about 0.96 seconds with error 00:24:40.308 Verification LBA range: start 0x0 length 0x400 00:24:40.309 Nvme6n1 : 0.96 132.83 8.30 66.41 0.00 285275.88 22719.15 262144.00 00:24:40.309 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:40.309 Job: Nvme7n1 ended in about 0.95 seconds with error 00:24:40.309 Verification LBA range: start 0x0 length 0x400 00:24:40.309 Nvme7n1 : 0.95 201.85 12.62 67.28 0.00 205932.80 7154.35 232434.35 00:24:40.309 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:40.309 Job: Nvme8n1 ended in about 0.97 seconds with error 00:24:40.309 Verification LBA range: start 0x0 length 0x400 00:24:40.309 Nvme8n1 : 0.97 198.74 12.42 66.25 0.00 204781.44 24794.45 251658.24 00:24:40.309 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:40.309 Job: Nvme9n1 ended in about 0.97 seconds with error 00:24:40.309 Verification LBA range: start 0x0 length 0x400 00:24:40.309 Nvme9n1 : 0.97 132.16 8.26 66.08 0.00 267459.41 23156.05 258648.75 00:24:40.309 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:40.309 Job: Nvme10n1 ended in about 0.97 seconds with error 00:24:40.309 Verification LBA range: start 0x0 length 0x400 00:24:40.309 Nvme10n1 : 0.97 131.80 8.24 65.90 0.00 261939.77 21736.11 286610.77 00:24:40.309 =================================================================================================================== 00:24:40.309 Total : 1679.46 
104.97 667.94 0.00 244957.07 7154.35 286610.77 00:24:40.309 [2024-05-15 19:40:06.346931] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:40.309 [2024-05-15 19:40:06.346960] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:24:40.309 [2024-05-15 19:40:06.346973] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.309 [2024-05-15 19:40:06.346980] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.309 [2024-05-15 19:40:06.347498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.309 [2024-05-15 19:40:06.347931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.309 [2024-05-15 19:40:06.347943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1468690 with addr=10.0.0.2, port=4420 00:24:40.309 [2024-05-15 19:40:06.347952] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1468690 is same with the state(5) to be set 00:24:40.309 [2024-05-15 19:40:06.348332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.309 [2024-05-15 19:40:06.348639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.309 [2024-05-15 19:40:06.348649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12bf7c0 with addr=10.0.0.2, port=4420 00:24:40.309 [2024-05-15 19:40:06.348656] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bf7c0 is same with the state(5) to be set 00:24:40.309 [2024-05-15 19:40:06.348860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.309 [2024-05-15 19:40:06.349066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.309 [2024-05-15 19:40:06.349077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134a2c0 with addr=10.0.0.2, port=4420 00:24:40.309 [2024-05-15 19:40:06.349084] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134a2c0 is same with the state(5) to be set 00:24:40.309 [2024-05-15 19:40:06.350672] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.309 [2024-05-15 19:40:06.350960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.309 [2024-05-15 19:40:06.351215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.309 [2024-05-15 19:40:06.351226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e2370 with addr=10.0.0.2, port=4420 00:24:40.309 [2024-05-15 19:40:06.351234] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e2370 is same with the state(5) to be set 00:24:40.309 [2024-05-15 19:40:06.351611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.309 [2024-05-15 19:40:06.352028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.309 [2024-05-15 19:40:06.352038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134cc00 with addr=10.0.0.2, port=4420 00:24:40.309 [2024-05-15 19:40:06.352046] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134cc00 is same with the state(5) to be set 00:24:40.309 [2024-05-15 
19:40:06.352275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.309 [2024-05-15 19:40:06.352654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.309 [2024-05-15 19:40:06.352665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e1ce0 with addr=10.0.0.2, port=4420 00:24:40.309 [2024-05-15 19:40:06.352672] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e1ce0 is same with the state(5) to be set 00:24:40.309 [2024-05-15 19:40:06.352684] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1468690 (9): Bad file descriptor 00:24:40.309 [2024-05-15 19:40:06.352694] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12bf7c0 (9): Bad file descriptor 00:24:40.309 [2024-05-15 19:40:06.352704] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134a2c0 (9): Bad file descriptor 00:24:40.309 [2024-05-15 19:40:06.352735] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:40.309 [2024-05-15 19:40:06.352764] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:40.309 [2024-05-15 19:40:06.352776] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:40.309 [2024-05-15 19:40:06.352786] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:40.309 [2024-05-15 19:40:06.352848] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:24:40.309 [2024-05-15 19:40:06.353285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.309 [2024-05-15 19:40:06.353660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.309 [2024-05-15 19:40:06.353671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129dee0 with addr=10.0.0.2, port=4420 00:24:40.309 [2024-05-15 19:40:06.353678] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129dee0 is same with the state(5) to be set 00:24:40.309 [2024-05-15 19:40:06.353687] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e2370 (9): Bad file descriptor 00:24:40.309 [2024-05-15 19:40:06.353696] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134cc00 (9): Bad file descriptor 00:24:40.309 [2024-05-15 19:40:06.353705] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e1ce0 (9): Bad file descriptor 00:24:40.309 [2024-05-15 19:40:06.353713] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:40.309 [2024-05-15 19:40:06.353720] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:24:40.309 [2024-05-15 19:40:06.353728] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
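A consistency check on the Latency(us) summary a few entries back: every job uses 65536-byte I/Os (IO size: 65536), so the MiB/s column should be IOPS divided by 16, which matches the reported totals. A one-line verification using the Total row's IOPS value (the awk line is mine, the numbers are from the table):

  # 65536 B per I/O => MiB/s = IOPS * 65536 / 1048576 = IOPS / 16
  awk 'BEGIN { printf "%.2f MiB/s expected for 1679.46 IOPS (log shows 104.97)\n", 1679.46 / 16 }'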
00:24:40.309 [2024-05-15 19:40:06.353738] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:24:40.309 [2024-05-15 19:40:06.353744] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:24:40.309 [2024-05-15 19:40:06.353751] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:24:40.309 [2024-05-15 19:40:06.353760] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:24:40.309 [2024-05-15 19:40:06.353767] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:24:40.309 [2024-05-15 19:40:06.353774] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:24:40.309 [2024-05-15 19:40:06.353842] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:24:40.309 [2024-05-15 19:40:06.353853] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:24:40.309 [2024-05-15 19:40:06.353862] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.309 [2024-05-15 19:40:06.353869] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.309 [2024-05-15 19:40:06.353874] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.309 [2024-05-15 19:40:06.354113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.309 [2024-05-15 19:40:06.354505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.309 [2024-05-15 19:40:06.354519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129abc0 with addr=10.0.0.2, port=4420 00:24:40.309 [2024-05-15 19:40:06.354526] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129abc0 is same with the state(5) to be set 00:24:40.309 [2024-05-15 19:40:06.354539] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129dee0 (9): Bad file descriptor 00:24:40.309 [2024-05-15 19:40:06.354547] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:24:40.309 [2024-05-15 19:40:06.354553] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:24:40.309 [2024-05-15 19:40:06.354560] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:24:40.309 [2024-05-15 19:40:06.354569] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:24:40.309 [2024-05-15 19:40:06.354575] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:24:40.309 [2024-05-15 19:40:06.354582] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
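Each controller that cannot reconnect goes through the same sequence seen above: nvme_ctrlr_process_init finds the controller in an error state, spdk_nvme_ctrlr_reconnect_poll_async gives up, nvme_ctrlr_fail marks it failed, and the reset is then reported as failed. To list which subsystems ended up in that state, something like the following works on a saved copy of the output (file name again illustrative):

  grep -o '\[nqn\.[^]]*\] in failed state' nvmf_shutdown_tc3.log | sort -u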
00:24:40.309 [2024-05-15 19:40:06.354591] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:24:40.309 [2024-05-15 19:40:06.354597] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:24:40.309 [2024-05-15 19:40:06.354603] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:24:40.309 [2024-05-15 19:40:06.354633] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.309 [2024-05-15 19:40:06.354641] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.309 [2024-05-15 19:40:06.354648] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.309 [2024-05-15 19:40:06.355048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.309 [2024-05-15 19:40:06.355450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.309 [2024-05-15 19:40:06.355461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda4610 with addr=10.0.0.2, port=4420 00:24:40.309 [2024-05-15 19:40:06.355468] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4610 is same with the state(5) to be set 00:24:40.309 [2024-05-15 19:40:06.355878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.309 [2024-05-15 19:40:06.356241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.309 [2024-05-15 19:40:06.356252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12ac800 with addr=10.0.0.2, port=4420 00:24:40.309 [2024-05-15 19:40:06.356259] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac800 is same with the state(5) to be set 00:24:40.309 [2024-05-15 19:40:06.356268] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129abc0 (9): Bad file descriptor 00:24:40.310 [2024-05-15 19:40:06.356276] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.310 [2024-05-15 19:40:06.356283] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.310 [2024-05-15 19:40:06.356290] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.310 [2024-05-15 19:40:06.356340] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.310 [2024-05-15 19:40:06.356351] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda4610 (9): Bad file descriptor 00:24:40.310 [2024-05-15 19:40:06.356360] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ac800 (9): Bad file descriptor 00:24:40.310 [2024-05-15 19:40:06.356368] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:24:40.310 [2024-05-15 19:40:06.356374] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:24:40.310 [2024-05-15 19:40:06.356381] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
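The repeated posix_sock_create errors with errno = 111 are ECONNREFUSED: once the target process is gone, every reconnect attempt to 10.0.0.2 port 4420 is refused immediately. The same refusal can be reproduced from a plain shell, for example with bash's /dev/tcp redirection (host and port taken from the log; this probe is not part of the test):

  if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
      echo 'connect to 10.0.0.2:4420 refused or timed out; no listener present'
  fi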
00:24:40.310 [2024-05-15 19:40:06.356422] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.310 [2024-05-15 19:40:06.356432] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:24:40.310 [2024-05-15 19:40:06.356438] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:24:40.310 [2024-05-15 19:40:06.356444] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:24:40.310 [2024-05-15 19:40:06.356453] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:24:40.310 [2024-05-15 19:40:06.356460] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:24:40.310 [2024-05-15 19:40:06.356467] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:24:40.310 [2024-05-15 19:40:06.356495] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.310 [2024-05-15 19:40:06.356503] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.571 19:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:24:40.571 19:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:24:41.555 19:40:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 3681465 00:24:41.555 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3681465) - No such process 00:24:41.555 19:40:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:24:41.555 19:40:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:24:41.555 19:40:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:41.555 19:40:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:41.555 19:40:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:41.555 19:40:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:41.555 19:40:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:41.555 19:40:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:24:41.555 19:40:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:41.555 19:40:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:24:41.555 19:40:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:41.555 19:40:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:41.555 rmmod nvme_tcp 00:24:41.555 rmmod nvme_fabrics 00:24:41.555 rmmod nvme_keyring 00:24:41.555 19:40:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:41.555 19:40:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:24:41.555 19:40:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- 
# return 0 00:24:41.555 19:40:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:24:41.555 19:40:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:41.555 19:40:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:41.555 19:40:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:41.555 19:40:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:41.555 19:40:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:41.555 19:40:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:41.555 19:40:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:41.555 19:40:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:44.104 19:40:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:44.104 00:24:44.104 real 0m7.431s 00:24:44.104 user 0m17.746s 00:24:44.104 sys 0m1.221s 00:24:44.104 19:40:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:44.104 19:40:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:44.104 ************************************ 00:24:44.104 END TEST nvmf_shutdown_tc3 00:24:44.104 ************************************ 00:24:44.104 19:40:09 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:24:44.104 00:24:44.104 real 0m33.824s 00:24:44.104 user 1m16.810s 00:24:44.104 sys 0m10.478s 00:24:44.104 19:40:09 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:44.104 19:40:09 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:44.104 ************************************ 00:24:44.104 END TEST nvmf_shutdown 00:24:44.104 ************************************ 00:24:44.104 19:40:09 nvmf_tcp -- nvmf/nvmf.sh@85 -- # timing_exit target 00:24:44.104 19:40:09 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:44.104 19:40:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:44.104 19:40:09 nvmf_tcp -- nvmf/nvmf.sh@87 -- # timing_enter host 00:24:44.104 19:40:09 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:44.104 19:40:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:44.104 19:40:09 nvmf_tcp -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:24:44.104 19:40:09 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:44.104 19:40:09 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:44.104 19:40:09 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:44.104 19:40:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:44.104 ************************************ 00:24:44.104 START TEST nvmf_multicontroller 00:24:44.104 ************************************ 00:24:44.104 19:40:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:44.104 * Looking for test storage... 
00:24:44.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:44.104 19:40:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:44.104 19:40:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:24:44.105 19:40:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:44.105 19:40:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:44.105 19:40:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:44.105 19:40:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:44.105 19:40:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:44.105 19:40:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:44.105 19:40:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:44.105 19:40:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:44.105 19:40:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:44.105 19:40:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:44.105 19:40:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:44.105 19:40:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:44.105 19:40:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:44.105 19:40:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:44.105 19:40:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:44.105 19:40:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:44.105 19:40:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:44.105 19:40:10 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:44.105 19:40:10 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:44.105 19:40:10 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:44.105 19:40:10 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.105 19:40:10 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.105 19:40:10 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.105 19:40:10 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:24:44.105 19:40:10 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.105 19:40:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:24:44.105 19:40:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:44.105 19:40:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:44.105 19:40:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:44.105 19:40:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:44.105 19:40:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:44.105 19:40:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:44.105 19:40:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:44.105 19:40:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:44.105 19:40:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:44.105 19:40:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:44.105 19:40:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:44.105 19:40:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:44.105 19:40:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:44.105 19:40:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:24:44.105 19:40:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:24:44.105 19:40:10 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:44.105 19:40:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:44.105 19:40:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:44.105 19:40:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:44.105 19:40:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:44.105 19:40:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:44.105 19:40:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:44.105 19:40:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:44.105 19:40:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:44.105 19:40:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:44.105 19:40:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:24:44.105 19:40:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:52.249 19:40:17 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:52.249 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:52.249 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:52.249 Found net devices under 0000:31:00.0: cvl_0_0 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:52.249 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:52.250 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:52.250 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:52.250 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:52.250 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:52.250 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:52.250 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:52.250 Found net devices under 0000:31:00.1: cvl_0_1 00:24:52.250 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:52.250 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:52.250 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:24:52.250 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:52.250 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:52.250 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:52.250 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:52.250 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:52.250 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:52.250 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:52.250 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:52.250 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:52.250 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:52.250 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:52.250 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:52.250 19:40:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:52.250 19:40:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:52.250 19:40:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:52.250 19:40:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:52.250 19:40:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:52.250 19:40:18 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:52.250 19:40:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:52.250 19:40:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:52.250 19:40:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:52.250 19:40:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:52.250 19:40:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:52.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:52.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.693 ms 00:24:52.250 00:24:52.250 --- 10.0.0.2 ping statistics --- 00:24:52.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.250 rtt min/avg/max/mdev = 0.693/0.693/0.693/0.000 ms 00:24:52.250 19:40:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:52.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:52.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.359 ms 00:24:52.250 00:24:52.250 --- 10.0.0.1 ping statistics --- 00:24:52.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.250 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:24:52.250 19:40:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:52.250 19:40:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:24:52.250 19:40:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:52.250 19:40:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:52.250 19:40:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:52.250 19:40:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:52.250 19:40:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:52.250 19:40:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:52.250 19:40:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:52.250 19:40:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:24:52.250 19:40:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:52.250 19:40:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:52.250 19:40:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:52.250 19:40:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=3687100 00:24:52.250 19:40:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 3687100 00:24:52.250 19:40:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:52.250 19:40:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 3687100 ']' 00:24:52.250 19:40:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:52.250 19:40:18 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:24:52.250 19:40:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:52.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:52.250 19:40:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:52.250 19:40:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:52.250 [2024-05-15 19:40:18.429137] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:24:52.250 [2024-05-15 19:40:18.429200] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:52.510 EAL: No free 2048 kB hugepages reported on node 1 00:24:52.510 [2024-05-15 19:40:18.506843] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:52.510 [2024-05-15 19:40:18.579322] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:52.510 [2024-05-15 19:40:18.579363] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:52.510 [2024-05-15 19:40:18.579371] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:52.510 [2024-05-15 19:40:18.579378] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:52.510 [2024-05-15 19:40:18.579383] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:52.510 [2024-05-15 19:40:18.579504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:52.510 [2024-05-15 19:40:18.579663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:52.510 [2024-05-15 19:40:18.579663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:53.454 [2024-05-15 19:40:19.360033] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.454 19:40:19 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:53.454 Malloc0 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:53.454 [2024-05-15 19:40:19.423522] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:53.454 [2024-05-15 19:40:19.423745] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:53.454 [2024-05-15 19:40:19.435680] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:53.454 Malloc1 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.454 19:40:19 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3687193 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3687193 /var/tmp/bdevperf.sock 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 3687193 ']' 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:53.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
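Up to this point multicontroller.sh has created two subsystems (cnode1 and cnode2), each backed by a 64 MB malloc bdev with 512-byte blocks and exposed through TCP listeners on 10.0.0.2 ports 4420 and 4421, and has started bdevperf in RPC-wait mode (-z) on /var/tmp/bdevperf.sock. A condensed sketch of that target-side RPC sequence, using scripts/rpc.py from the SPDK tree directly rather than the harness's rpc_cmd wrapper and assuming an nvmf_tgt already listening on the default /var/tmp/spdk.sock:

  # Sketch only: replays the RPC calls visible in the trace above.
  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421

The negative tests that follow re-attach the same controller name NVMe0 with conflicting host parameters and expect JSON-RPC error -114 from bdev_nvme_attach_controller; only the later attaches against port 4421 (detach/re-attach of NVMe0 and the new NVMe1) are expected to succeed before bdevperf.py perform_tests runs the 1-second write workload.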
00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:53.454 19:40:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:54.397 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:54.397 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:24:54.397 19:40:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:54.397 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.397 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:54.397 NVMe0n1 00:24:54.397 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.397 19:40:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:54.397 19:40:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.398 1 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:54.398 request: 00:24:54.398 { 00:24:54.398 "name": "NVMe0", 00:24:54.398 "trtype": "tcp", 00:24:54.398 "traddr": "10.0.0.2", 00:24:54.398 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:54.398 "hostaddr": "10.0.0.2", 00:24:54.398 "hostsvcid": "60000", 00:24:54.398 "adrfam": "ipv4", 00:24:54.398 "trsvcid": "4420", 00:24:54.398 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:54.398 "method": 
"bdev_nvme_attach_controller", 00:24:54.398 "req_id": 1 00:24:54.398 } 00:24:54.398 Got JSON-RPC error response 00:24:54.398 response: 00:24:54.398 { 00:24:54.398 "code": -114, 00:24:54.398 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:54.398 } 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:54.398 request: 00:24:54.398 { 00:24:54.398 "name": "NVMe0", 00:24:54.398 "trtype": "tcp", 00:24:54.398 "traddr": "10.0.0.2", 00:24:54.398 "hostaddr": "10.0.0.2", 00:24:54.398 "hostsvcid": "60000", 00:24:54.398 "adrfam": "ipv4", 00:24:54.398 "trsvcid": "4420", 00:24:54.398 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:54.398 "method": "bdev_nvme_attach_controller", 00:24:54.398 "req_id": 1 00:24:54.398 } 00:24:54.398 Got JSON-RPC error response 00:24:54.398 response: 00:24:54.398 { 00:24:54.398 "code": -114, 00:24:54.398 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:54.398 } 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:54.398 request: 00:24:54.398 { 00:24:54.398 "name": "NVMe0", 00:24:54.398 "trtype": "tcp", 00:24:54.398 "traddr": "10.0.0.2", 00:24:54.398 "hostaddr": "10.0.0.2", 00:24:54.398 "hostsvcid": "60000", 00:24:54.398 "adrfam": "ipv4", 00:24:54.398 "trsvcid": "4420", 00:24:54.398 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:54.398 "multipath": "disable", 00:24:54.398 "method": "bdev_nvme_attach_controller", 00:24:54.398 "req_id": 1 00:24:54.398 } 00:24:54.398 Got JSON-RPC error response 00:24:54.398 response: 00:24:54.398 { 00:24:54.398 "code": -114, 00:24:54.398 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:24:54.398 } 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:54.398 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:54.659 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:54.659 19:40:20 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:54.659 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:54.659 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:54.659 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.659 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:54.659 request: 00:24:54.659 { 00:24:54.659 "name": "NVMe0", 00:24:54.659 "trtype": "tcp", 00:24:54.659 "traddr": "10.0.0.2", 00:24:54.659 "hostaddr": "10.0.0.2", 00:24:54.659 "hostsvcid": "60000", 00:24:54.659 "adrfam": "ipv4", 00:24:54.659 "trsvcid": "4420", 00:24:54.659 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:54.659 "multipath": "failover", 00:24:54.659 "method": "bdev_nvme_attach_controller", 00:24:54.659 "req_id": 1 00:24:54.659 } 00:24:54.659 Got JSON-RPC error response 00:24:54.659 response: 00:24:54.659 { 00:24:54.659 "code": -114, 00:24:54.659 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:54.659 } 00:24:54.659 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:54.659 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:24:54.659 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:54.659 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:54.659 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:54.659 19:40:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:54.659 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.659 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:54.659 00:24:54.659 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.659 19:40:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:54.659 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.659 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:54.919 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.919 19:40:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:54.920 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.920 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:54.920 00:24:54.920 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.920 19:40:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:54.920 19:40:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:54.920 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.920 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:54.920 19:40:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.920 19:40:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:54.920 19:40:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:56.308 0 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 3687193 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 3687193 ']' 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 3687193 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3687193 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3687193' 00:24:56.308 killing process with pid 3687193 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 3687193 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 3687193 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:24:56.308 19:40:22 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:24:56.308 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:56.308 [2024-05-15 19:40:19.553567] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:24:56.308 [2024-05-15 19:40:19.553624] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3687193 ] 00:24:56.308 EAL: No free 2048 kB hugepages reported on node 1 00:24:56.308 [2024-05-15 19:40:19.639444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.308 [2024-05-15 19:40:19.704507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:56.308 [2024-05-15 19:40:20.952899] bdev.c:4575:bdev_name_add: *ERROR*: Bdev name 586103e0-68ee-4e6a-8d4a-c02a17e642c2 already exists 00:24:56.308 [2024-05-15 19:40:20.952930] bdev.c:7691:bdev_register: *ERROR*: Unable to add uuid:586103e0-68ee-4e6a-8d4a-c02a17e642c2 alias for bdev NVMe1n1 00:24:56.308 [2024-05-15 19:40:20.952940] bdev_nvme.c:4297:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:56.308 Running I/O for 1 seconds... 
00:24:56.308 00:24:56.308 Latency(us) 00:24:56.308 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:56.308 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:24:56.308 NVMe0n1 : 1.00 20519.46 80.15 0.00 0.00 6220.90 2020.69 9120.43 00:24:56.308 =================================================================================================================== 00:24:56.308 Total : 20519.46 80.15 0.00 0.00 6220.90 2020.69 9120.43 00:24:56.308 Received shutdown signal, test time was about 1.000000 seconds 00:24:56.308 00:24:56.308 Latency(us) 00:24:56.308 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:56.308 =================================================================================================================== 00:24:56.308 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:56.308 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:56.308 rmmod nvme_tcp 00:24:56.308 rmmod nvme_fabrics 00:24:56.308 rmmod nvme_keyring 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 3687100 ']' 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 3687100 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 3687100 ']' 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 3687100 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3687100 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3687100' 00:24:56.308 killing process with pid 3687100 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 3687100 00:24:56.308 [2024-05-15 
19:40:22.475364] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:56.308 19:40:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 3687100 00:24:56.569 19:40:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:56.569 19:40:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:56.569 19:40:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:56.569 19:40:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:56.569 19:40:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:56.569 19:40:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:56.569 19:40:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:56.569 19:40:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:59.128 19:40:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:59.128 00:24:59.128 real 0m14.796s 00:24:59.128 user 0m17.593s 00:24:59.128 sys 0m7.064s 00:24:59.128 19:40:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:59.128 19:40:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:59.128 ************************************ 00:24:59.128 END TEST nvmf_multicontroller 00:24:59.128 ************************************ 00:24:59.128 19:40:24 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:59.128 19:40:24 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:59.128 19:40:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:59.128 19:40:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:59.128 ************************************ 00:24:59.128 START TEST nvmf_aer 00:24:59.128 ************************************ 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:59.128 * Looking for test storage... 
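The nvmf_aer run that follows rebuilds the same two-port test network before starting its own nvmf_tgt. A condensed sketch of that plumbing, lifted from the nvmf/common.sh lines traced further down (interface names cvl_0_0/cvl_0_1 are the two e810 ports once the ice driver is bound; run as root):

  # Sketch of the per-test network setup replayed below in the trace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator

The target itself is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ...), which is why every listener in these tests binds 10.0.0.2.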
00:24:59.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:24:59.128 19:40:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:07.268 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 
0x159b)' 00:25:07.268 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:07.268 Found net devices under 0000:31:00.0: cvl_0_0 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:07.268 Found net devices under 0000:31:00.1: cvl_0_1 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:07.268 
19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:07.268 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:07.269 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:07.269 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:07.269 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:07.269 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:07.269 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:07.269 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:07.269 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:07.269 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:07.269 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:07.269 19:40:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:07.269 19:40:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:07.269 19:40:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:07.269 19:40:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:07.269 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:07.269 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.432 ms 00:25:07.269 00:25:07.269 --- 10.0.0.2 ping statistics --- 00:25:07.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.269 rtt min/avg/max/mdev = 0.432/0.432/0.432/0.000 ms 00:25:07.269 19:40:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:07.269 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:07.269 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:25:07.269 00:25:07.269 --- 10.0.0.1 ping statistics --- 00:25:07.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.269 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:25:07.269 19:40:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:07.269 19:40:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:25:07.269 19:40:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:07.269 19:40:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:07.269 19:40:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:07.269 19:40:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:07.269 19:40:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:07.269 19:40:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:07.269 19:40:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:07.269 19:40:33 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:25:07.269 19:40:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:07.269 19:40:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:07.269 19:40:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:07.269 19:40:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=3692548 00:25:07.269 19:40:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 3692548 00:25:07.269 19:40:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:07.269 19:40:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 3692548 ']' 00:25:07.269 19:40:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:07.269 19:40:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:07.269 19:40:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:07.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:07.269 19:40:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:07.269 19:40:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:07.269 [2024-05-15 19:40:33.158397] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:25:07.269 [2024-05-15 19:40:33.158442] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:07.269 EAL: No free 2048 kB hugepages reported on node 1 00:25:07.269 [2024-05-15 19:40:33.252529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:07.269 [2024-05-15 19:40:33.317870] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:07.269 [2024-05-15 19:40:33.317907] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:07.269 [2024-05-15 19:40:33.317915] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:07.269 [2024-05-15 19:40:33.317924] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:07.269 [2024-05-15 19:40:33.317930] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:07.269 [2024-05-15 19:40:33.318038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:07.269 [2024-05-15 19:40:33.318057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:07.269 [2024-05-15 19:40:33.318190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:07.269 [2024-05-15 19:40:33.318190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:08.213 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:08.213 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:25:08.213 19:40:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:08.213 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:08.213 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:08.213 19:40:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:08.213 19:40:34 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:08.213 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.213 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:08.213 [2024-05-15 19:40:34.082167] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:08.213 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.213 19:40:34 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:25:08.213 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.213 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:08.213 Malloc0 00:25:08.213 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.213 19:40:34 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:25:08.213 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.213 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:08.213 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.213 19:40:34 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:08.213 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.213 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:08.213 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.213 19:40:34 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:08.213 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.213 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:08.213 [2024-05-15 19:40:34.141161] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:08.213 [2024-05-15 19:40:34.141405] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:08.213 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.213 19:40:34 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:25:08.213 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.213 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:08.213 [ 00:25:08.213 { 00:25:08.213 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:08.213 "subtype": "Discovery", 00:25:08.213 "listen_addresses": [], 00:25:08.213 "allow_any_host": true, 00:25:08.213 "hosts": [] 00:25:08.213 }, 00:25:08.213 { 00:25:08.213 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:08.213 "subtype": "NVMe", 00:25:08.213 "listen_addresses": [ 00:25:08.213 { 00:25:08.213 "trtype": "TCP", 00:25:08.213 "adrfam": "IPv4", 00:25:08.213 "traddr": "10.0.0.2", 00:25:08.213 "trsvcid": "4420" 00:25:08.213 } 00:25:08.213 ], 00:25:08.213 "allow_any_host": true, 00:25:08.213 "hosts": [], 00:25:08.213 "serial_number": "SPDK00000000000001", 00:25:08.213 "model_number": "SPDK bdev Controller", 00:25:08.213 "max_namespaces": 2, 00:25:08.213 "min_cntlid": 1, 00:25:08.213 "max_cntlid": 65519, 00:25:08.213 "namespaces": [ 00:25:08.213 { 00:25:08.213 "nsid": 1, 00:25:08.213 "bdev_name": "Malloc0", 00:25:08.213 "name": "Malloc0", 00:25:08.213 "nguid": "D19A77CB958348E4B9838CD92B42A3D2", 00:25:08.213 "uuid": "d19a77cb-9583-48e4-b983-8cd92b42a3d2" 00:25:08.213 } 00:25:08.213 ] 00:25:08.213 } 00:25:08.213 ] 00:25:08.213 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.213 19:40:34 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:25:08.213 19:40:34 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:25:08.213 19:40:34 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=3692792 00:25:08.213 19:40:34 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:25:08.213 19:40:34 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:25:08.213 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:25:08.213 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:08.213 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:25:08.213 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:25:08.213 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:25:08.213 EAL: No free 2048 kB hugepages reported on node 1 00:25:08.214 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:08.214 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:25:08.214 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:25:08.214 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:25:08.214 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:25:08.214 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 2 -lt 200 ']' 00:25:08.214 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=3 00:25:08.214 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:25:08.474 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:08.474 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:08.474 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:25:08.474 19:40:34 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:25:08.474 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.474 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:08.474 Malloc1 00:25:08.474 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.474 19:40:34 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:25:08.474 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.474 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:08.474 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.474 19:40:34 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:25:08.474 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.474 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:08.474 Asynchronous Event Request test 00:25:08.474 Attaching to 10.0.0.2 00:25:08.474 Attached to 10.0.0.2 00:25:08.474 Registering asynchronous event callbacks... 00:25:08.474 Starting namespace attribute notice tests for all controllers... 00:25:08.474 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:25:08.474 aer_cb - Changed Namespace 00:25:08.474 Cleaning up... 
00:25:08.474 [ 00:25:08.474 { 00:25:08.474 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:08.474 "subtype": "Discovery", 00:25:08.474 "listen_addresses": [], 00:25:08.474 "allow_any_host": true, 00:25:08.474 "hosts": [] 00:25:08.474 }, 00:25:08.474 { 00:25:08.474 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:08.474 "subtype": "NVMe", 00:25:08.474 "listen_addresses": [ 00:25:08.474 { 00:25:08.474 "trtype": "TCP", 00:25:08.474 "adrfam": "IPv4", 00:25:08.474 "traddr": "10.0.0.2", 00:25:08.474 "trsvcid": "4420" 00:25:08.474 } 00:25:08.474 ], 00:25:08.474 "allow_any_host": true, 00:25:08.474 "hosts": [], 00:25:08.474 "serial_number": "SPDK00000000000001", 00:25:08.474 "model_number": "SPDK bdev Controller", 00:25:08.474 "max_namespaces": 2, 00:25:08.474 "min_cntlid": 1, 00:25:08.474 "max_cntlid": 65519, 00:25:08.474 "namespaces": [ 00:25:08.474 { 00:25:08.474 "nsid": 1, 00:25:08.474 "bdev_name": "Malloc0", 00:25:08.474 "name": "Malloc0", 00:25:08.474 "nguid": "D19A77CB958348E4B9838CD92B42A3D2", 00:25:08.474 "uuid": "d19a77cb-9583-48e4-b983-8cd92b42a3d2" 00:25:08.474 }, 00:25:08.474 { 00:25:08.474 "nsid": 2, 00:25:08.474 "bdev_name": "Malloc1", 00:25:08.474 "name": "Malloc1", 00:25:08.474 "nguid": "48CF651A863C4614BF903DD89BB4EB94", 00:25:08.474 "uuid": "48cf651a-863c-4614-bf90-3dd89bb4eb94" 00:25:08.474 } 00:25:08.474 ] 00:25:08.474 } 00:25:08.474 ] 00:25:08.474 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.474 19:40:34 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 3692792 00:25:08.474 19:40:34 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:25:08.474 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.474 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:08.474 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.474 19:40:34 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:25:08.475 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.475 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:08.475 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.475 19:40:34 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:08.475 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.475 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:08.475 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.475 19:40:34 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:25:08.475 19:40:34 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:25:08.475 19:40:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:08.475 19:40:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:25:08.475 19:40:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:08.475 19:40:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:25:08.475 19:40:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:08.475 19:40:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:08.475 rmmod nvme_tcp 00:25:08.475 rmmod nvme_fabrics 00:25:08.475 rmmod nvme_keyring 00:25:08.475 19:40:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:08.735 19:40:34 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@124 -- # set -e 00:25:08.735 19:40:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:25:08.735 19:40:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 3692548 ']' 00:25:08.735 19:40:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 3692548 00:25:08.735 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 3692548 ']' 00:25:08.735 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 3692548 00:25:08.735 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:25:08.735 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:08.735 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3692548 00:25:08.735 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:08.735 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:08.735 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3692548' 00:25:08.735 killing process with pid 3692548 00:25:08.735 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 3692548 00:25:08.735 [2024-05-15 19:40:34.717804] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:08.735 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 3692548 00:25:08.735 19:40:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:08.735 19:40:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:08.735 19:40:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:08.735 19:40:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:08.735 19:40:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:08.736 19:40:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.736 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:08.736 19:40:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.283 19:40:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:11.283 00:25:11.283 real 0m12.133s 00:25:11.283 user 0m8.417s 00:25:11.283 sys 0m6.634s 00:25:11.283 19:40:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:11.283 19:40:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:11.283 ************************************ 00:25:11.283 END TEST nvmf_aer 00:25:11.283 ************************************ 00:25:11.283 19:40:36 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:11.283 19:40:36 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:11.283 19:40:36 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:11.283 19:40:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:11.283 ************************************ 00:25:11.283 START TEST nvmf_async_init 00:25:11.283 ************************************ 00:25:11.283 19:40:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 
00:25:11.283 * Looking for test storage... 00:25:11.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:11.283 19:40:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:11.283 19:40:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:25:11.283 19:40:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:11.283 19:40:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:11.283 19:40:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:11.283 19:40:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:11.283 19:40:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:11.283 19:40:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:11.283 19:40:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:11.283 19:40:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:11.283 19:40:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:11.283 19:40:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:11.283 19:40:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:11.283 19:40:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:11.283 19:40:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:11.283 19:40:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:11.283 19:40:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:11.283 19:40:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:11.283 19:40:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:11.283 19:40:37 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:11.283 19:40:37 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:11.283 19:40:37 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:11.283 19:40:37 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.283 19:40:37 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.283 19:40:37 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.283 19:40:37 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:25:11.283 19:40:37 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.283 19:40:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:25:11.283 19:40:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:11.283 19:40:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:11.283 19:40:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:11.283 19:40:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:11.283 19:40:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:11.283 19:40:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:11.283 19:40:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:11.283 19:40:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:11.283 19:40:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:25:11.283 19:40:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:25:11.283 19:40:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:25:11.283 19:40:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:25:11.283 19:40:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:25:11.283 19:40:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:25:11.284 19:40:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=9a151ad11d50461aae75226b645bd20c 00:25:11.284 19:40:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:25:11.284 19:40:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:11.284 19:40:37 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:11.284 19:40:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:11.284 19:40:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:11.284 19:40:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:11.284 19:40:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.284 19:40:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:11.284 19:40:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.284 19:40:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:11.284 19:40:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:11.284 19:40:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:25:11.284 19:40:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:19.431 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:19.431 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:25:19.431 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:19.431 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:19.431 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:19.431 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:19.431 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:19.431 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:25:19.431 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:19.431 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:25:19.431 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:25:19.431 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:25:19.431 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:25:19.431 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:25:19.431 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:25:19.431 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:19.431 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:19.431 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:19.431 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:19.431 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:19.431 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:19.431 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:19.431 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:19.431 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:19.431 19:40:45 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:19.431 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:19.431 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:19.431 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:19.431 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:19.431 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:19.431 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:19.431 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:19.431 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:19.431 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:19.431 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:19.431 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:19.431 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:19.431 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:19.431 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:19.432 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:19.432 Found net devices under 0000:31:00.0: cvl_0_0 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:19.432 Found net devices under 0000:31:00.1: cvl_0_1 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:19.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:19.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:25:19.432 00:25:19.432 --- 10.0.0.2 ping statistics --- 00:25:19.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.432 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:19.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:19.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.371 ms 00:25:19.432 00:25:19.432 --- 10.0.0.1 ping statistics --- 00:25:19.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.432 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3697577 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 3697577 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@827 -- # '[' -z 3697577 ']' 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:19.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:19.432 19:40:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:19.432 [2024-05-15 19:40:45.517384] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
00:25:19.432 [2024-05-15 19:40:45.517447] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:19.432 EAL: No free 2048 kB hugepages reported on node 1 00:25:19.694 [2024-05-15 19:40:45.615608] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.694 [2024-05-15 19:40:45.710194] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:19.694 [2024-05-15 19:40:45.710252] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:19.694 [2024-05-15 19:40:45.710261] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:19.694 [2024-05-15 19:40:45.710268] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:19.694 [2024-05-15 19:40:45.710274] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:19.694 [2024-05-15 19:40:45.710304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:20.267 19:40:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:20.267 19:40:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:25:20.267 19:40:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:20.268 19:40:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:20.268 19:40:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:20.268 19:40:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:20.529 19:40:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:20.529 19:40:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.529 19:40:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:20.529 [2024-05-15 19:40:46.458290] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:20.529 19:40:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.529 19:40:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:25:20.529 19:40:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.529 19:40:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:20.529 null0 00:25:20.529 19:40:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.529 19:40:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:25:20.529 19:40:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.529 19:40:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:20.529 19:40:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.529 19:40:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:25:20.529 19:40:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.529 19:40:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:20.529 19:40:46 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.529 19:40:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 9a151ad11d50461aae75226b645bd20c 00:25:20.529 19:40:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.529 19:40:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:20.529 19:40:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.529 19:40:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:20.529 19:40:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.529 19:40:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:20.529 [2024-05-15 19:40:46.518389] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:20.529 [2024-05-15 19:40:46.518658] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:20.529 19:40:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.529 19:40:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:25:20.529 19:40:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.529 19:40:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:20.791 nvme0n1 00:25:20.791 19:40:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.791 19:40:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:20.791 19:40:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.791 19:40:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:20.791 [ 00:25:20.791 { 00:25:20.791 "name": "nvme0n1", 00:25:20.791 "aliases": [ 00:25:20.791 "9a151ad1-1d50-461a-ae75-226b645bd20c" 00:25:20.791 ], 00:25:20.791 "product_name": "NVMe disk", 00:25:20.791 "block_size": 512, 00:25:20.791 "num_blocks": 2097152, 00:25:20.791 "uuid": "9a151ad1-1d50-461a-ae75-226b645bd20c", 00:25:20.791 "assigned_rate_limits": { 00:25:20.791 "rw_ios_per_sec": 0, 00:25:20.791 "rw_mbytes_per_sec": 0, 00:25:20.791 "r_mbytes_per_sec": 0, 00:25:20.791 "w_mbytes_per_sec": 0 00:25:20.791 }, 00:25:20.791 "claimed": false, 00:25:20.791 "zoned": false, 00:25:20.791 "supported_io_types": { 00:25:20.791 "read": true, 00:25:20.791 "write": true, 00:25:20.791 "unmap": false, 00:25:20.791 "write_zeroes": true, 00:25:20.791 "flush": true, 00:25:20.791 "reset": true, 00:25:20.791 "compare": true, 00:25:20.791 "compare_and_write": true, 00:25:20.791 "abort": true, 00:25:20.791 "nvme_admin": true, 00:25:20.791 "nvme_io": true 00:25:20.791 }, 00:25:20.791 "memory_domains": [ 00:25:20.791 { 00:25:20.791 "dma_device_id": "system", 00:25:20.791 "dma_device_type": 1 00:25:20.791 } 00:25:20.791 ], 00:25:20.791 "driver_specific": { 00:25:20.791 "nvme": [ 00:25:20.791 { 00:25:20.791 "trid": { 00:25:20.791 "trtype": "TCP", 00:25:20.791 "adrfam": "IPv4", 00:25:20.791 "traddr": "10.0.0.2", 00:25:20.791 "trsvcid": "4420", 00:25:20.791 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:20.791 }, 
00:25:20.791 "ctrlr_data": { 00:25:20.791 "cntlid": 1, 00:25:20.791 "vendor_id": "0x8086", 00:25:20.791 "model_number": "SPDK bdev Controller", 00:25:20.791 "serial_number": "00000000000000000000", 00:25:20.791 "firmware_revision": "24.05", 00:25:20.791 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:20.791 "oacs": { 00:25:20.791 "security": 0, 00:25:20.791 "format": 0, 00:25:20.791 "firmware": 0, 00:25:20.791 "ns_manage": 0 00:25:20.791 }, 00:25:20.791 "multi_ctrlr": true, 00:25:20.791 "ana_reporting": false 00:25:20.791 }, 00:25:20.791 "vs": { 00:25:20.791 "nvme_version": "1.3" 00:25:20.791 }, 00:25:20.791 "ns_data": { 00:25:20.791 "id": 1, 00:25:20.791 "can_share": true 00:25:20.791 } 00:25:20.791 } 00:25:20.791 ], 00:25:20.791 "mp_policy": "active_passive" 00:25:20.791 } 00:25:20.791 } 00:25:20.791 ] 00:25:20.791 19:40:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.791 19:40:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:25:20.791 19:40:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.791 19:40:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:20.791 [2024-05-15 19:40:46.791804] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:20.791 [2024-05-15 19:40:46.791892] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14208e0 (9): Bad file descriptor 00:25:20.791 [2024-05-15 19:40:46.923433] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:20.791 19:40:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.791 19:40:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:20.791 19:40:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.791 19:40:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:20.791 [ 00:25:20.791 { 00:25:20.791 "name": "nvme0n1", 00:25:20.791 "aliases": [ 00:25:20.791 "9a151ad1-1d50-461a-ae75-226b645bd20c" 00:25:20.791 ], 00:25:20.791 "product_name": "NVMe disk", 00:25:20.791 "block_size": 512, 00:25:20.791 "num_blocks": 2097152, 00:25:20.791 "uuid": "9a151ad1-1d50-461a-ae75-226b645bd20c", 00:25:20.791 "assigned_rate_limits": { 00:25:20.791 "rw_ios_per_sec": 0, 00:25:20.791 "rw_mbytes_per_sec": 0, 00:25:20.791 "r_mbytes_per_sec": 0, 00:25:20.791 "w_mbytes_per_sec": 0 00:25:20.791 }, 00:25:20.791 "claimed": false, 00:25:20.791 "zoned": false, 00:25:20.791 "supported_io_types": { 00:25:20.791 "read": true, 00:25:20.791 "write": true, 00:25:20.791 "unmap": false, 00:25:20.791 "write_zeroes": true, 00:25:20.791 "flush": true, 00:25:20.791 "reset": true, 00:25:20.791 "compare": true, 00:25:20.791 "compare_and_write": true, 00:25:20.791 "abort": true, 00:25:20.791 "nvme_admin": true, 00:25:20.791 "nvme_io": true 00:25:20.791 }, 00:25:20.791 "memory_domains": [ 00:25:20.791 { 00:25:20.791 "dma_device_id": "system", 00:25:20.791 "dma_device_type": 1 00:25:20.791 } 00:25:20.791 ], 00:25:20.791 "driver_specific": { 00:25:20.791 "nvme": [ 00:25:20.791 { 00:25:20.791 "trid": { 00:25:20.791 "trtype": "TCP", 00:25:20.791 "adrfam": "IPv4", 00:25:20.791 "traddr": "10.0.0.2", 00:25:20.791 "trsvcid": "4420", 00:25:20.791 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:20.791 }, 00:25:20.791 "ctrlr_data": { 00:25:20.791 "cntlid": 2, 00:25:20.791 
"vendor_id": "0x8086", 00:25:20.791 "model_number": "SPDK bdev Controller", 00:25:20.791 "serial_number": "00000000000000000000", 00:25:20.791 "firmware_revision": "24.05", 00:25:20.791 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:20.791 "oacs": { 00:25:20.791 "security": 0, 00:25:20.791 "format": 0, 00:25:20.791 "firmware": 0, 00:25:20.791 "ns_manage": 0 00:25:20.791 }, 00:25:20.791 "multi_ctrlr": true, 00:25:20.791 "ana_reporting": false 00:25:20.791 }, 00:25:20.791 "vs": { 00:25:20.791 "nvme_version": "1.3" 00:25:20.791 }, 00:25:20.791 "ns_data": { 00:25:20.791 "id": 1, 00:25:20.791 "can_share": true 00:25:20.791 } 00:25:20.791 } 00:25:20.791 ], 00:25:20.791 "mp_policy": "active_passive" 00:25:20.791 } 00:25:20.791 } 00:25:20.791 ] 00:25:20.791 19:40:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.791 19:40:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.791 19:40:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.791 19:40:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:20.791 19:40:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.791 19:40:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:25:20.791 19:40:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.XbtBACcQru 00:25:20.791 19:40:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:20.791 19:40:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.XbtBACcQru 00:25:21.053 19:40:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:25:21.053 19:40:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.053 19:40:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:21.053 19:40:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.053 19:40:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:25:21.053 19:40:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.053 19:40:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:21.053 [2024-05-15 19:40:46.996442] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:21.053 [2024-05-15 19:40:46.996606] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:21.053 19:40:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.053 19:40:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XbtBACcQru 00:25:21.053 19:40:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.053 19:40:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:21.053 [2024-05-15 19:40:47.008471] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:21.053 19:40:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.053 19:40:47 
nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XbtBACcQru 00:25:21.053 19:40:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.053 19:40:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:21.053 [2024-05-15 19:40:47.020500] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:21.053 [2024-05-15 19:40:47.020546] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:21.053 nvme0n1 00:25:21.053 19:40:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.054 19:40:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:21.054 19:40:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.054 19:40:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:21.054 [ 00:25:21.054 { 00:25:21.054 "name": "nvme0n1", 00:25:21.054 "aliases": [ 00:25:21.054 "9a151ad1-1d50-461a-ae75-226b645bd20c" 00:25:21.054 ], 00:25:21.054 "product_name": "NVMe disk", 00:25:21.054 "block_size": 512, 00:25:21.054 "num_blocks": 2097152, 00:25:21.054 "uuid": "9a151ad1-1d50-461a-ae75-226b645bd20c", 00:25:21.054 "assigned_rate_limits": { 00:25:21.054 "rw_ios_per_sec": 0, 00:25:21.054 "rw_mbytes_per_sec": 0, 00:25:21.054 "r_mbytes_per_sec": 0, 00:25:21.054 "w_mbytes_per_sec": 0 00:25:21.054 }, 00:25:21.054 "claimed": false, 00:25:21.054 "zoned": false, 00:25:21.054 "supported_io_types": { 00:25:21.054 "read": true, 00:25:21.054 "write": true, 00:25:21.054 "unmap": false, 00:25:21.054 "write_zeroes": true, 00:25:21.054 "flush": true, 00:25:21.054 "reset": true, 00:25:21.054 "compare": true, 00:25:21.054 "compare_and_write": true, 00:25:21.054 "abort": true, 00:25:21.054 "nvme_admin": true, 00:25:21.054 "nvme_io": true 00:25:21.054 }, 00:25:21.054 "memory_domains": [ 00:25:21.054 { 00:25:21.054 "dma_device_id": "system", 00:25:21.054 "dma_device_type": 1 00:25:21.054 } 00:25:21.054 ], 00:25:21.054 "driver_specific": { 00:25:21.054 "nvme": [ 00:25:21.054 { 00:25:21.054 "trid": { 00:25:21.054 "trtype": "TCP", 00:25:21.054 "adrfam": "IPv4", 00:25:21.054 "traddr": "10.0.0.2", 00:25:21.054 "trsvcid": "4421", 00:25:21.054 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:21.054 }, 00:25:21.054 "ctrlr_data": { 00:25:21.054 "cntlid": 3, 00:25:21.054 "vendor_id": "0x8086", 00:25:21.054 "model_number": "SPDK bdev Controller", 00:25:21.054 "serial_number": "00000000000000000000", 00:25:21.054 "firmware_revision": "24.05", 00:25:21.054 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:21.054 "oacs": { 00:25:21.054 "security": 0, 00:25:21.054 "format": 0, 00:25:21.054 "firmware": 0, 00:25:21.054 "ns_manage": 0 00:25:21.054 }, 00:25:21.054 "multi_ctrlr": true, 00:25:21.054 "ana_reporting": false 00:25:21.054 }, 00:25:21.054 "vs": { 00:25:21.054 "nvme_version": "1.3" 00:25:21.054 }, 00:25:21.054 "ns_data": { 00:25:21.054 "id": 1, 00:25:21.054 "can_share": true 00:25:21.054 } 00:25:21.054 } 00:25:21.054 ], 00:25:21.054 "mp_policy": "active_passive" 00:25:21.054 } 00:25:21.054 } 00:25:21.054 ] 00:25:21.054 19:40:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.054 19:40:47 nvmf_tcp.nvmf_async_init -- 
host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.054 19:40:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.054 19:40:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:21.054 19:40:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.054 19:40:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.XbtBACcQru 00:25:21.054 19:40:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:25:21.054 19:40:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:25:21.054 19:40:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:21.054 19:40:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:25:21.054 19:40:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:21.054 19:40:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:25:21.054 19:40:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:21.054 19:40:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:21.054 rmmod nvme_tcp 00:25:21.054 rmmod nvme_fabrics 00:25:21.054 rmmod nvme_keyring 00:25:21.054 19:40:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:21.054 19:40:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:25:21.054 19:40:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:25:21.054 19:40:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3697577 ']' 00:25:21.054 19:40:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 3697577 00:25:21.054 19:40:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 3697577 ']' 00:25:21.054 19:40:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 3697577 00:25:21.054 19:40:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:25:21.054 19:40:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:21.054 19:40:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3697577 00:25:21.316 19:40:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:21.316 19:40:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:21.316 19:40:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3697577' 00:25:21.316 killing process with pid 3697577 00:25:21.316 19:40:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 3697577 00:25:21.316 [2024-05-15 19:40:47.282441] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:21.316 [2024-05-15 19:40:47.282483] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:21.316 [2024-05-15 19:40:47.282492] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:21.316 19:40:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 3697577 00:25:21.316 19:40:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:21.316 19:40:47 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:21.316 19:40:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:21.316 19:40:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:21.316 19:40:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:21.316 19:40:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:21.316 19:40:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:21.316 19:40:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:23.863 19:40:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:23.863 00:25:23.863 real 0m12.498s 00:25:23.863 user 0m4.392s 00:25:23.863 sys 0m6.658s 00:25:23.863 19:40:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:23.863 19:40:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:23.863 ************************************ 00:25:23.863 END TEST nvmf_async_init 00:25:23.863 ************************************ 00:25:23.863 19:40:49 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:23.863 19:40:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:23.863 19:40:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:23.863 19:40:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:23.863 ************************************ 00:25:23.863 START TEST dma 00:25:23.863 ************************************ 00:25:23.863 19:40:49 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:23.863 * Looking for test storage... 
00:25:23.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:23.863 19:40:49 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:23.863 19:40:49 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:25:23.863 19:40:49 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:23.863 19:40:49 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:23.863 19:40:49 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:23.863 19:40:49 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:23.863 19:40:49 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:23.863 19:40:49 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:23.863 19:40:49 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:23.863 19:40:49 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:23.863 19:40:49 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:23.863 19:40:49 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:23.864 19:40:49 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:23.864 19:40:49 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:23.864 19:40:49 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:23.864 19:40:49 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:23.864 19:40:49 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:23.864 19:40:49 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:23.864 19:40:49 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:23.864 19:40:49 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:23.864 19:40:49 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:23.864 19:40:49 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:23.864 19:40:49 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.864 19:40:49 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.864 19:40:49 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.864 19:40:49 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:25:23.864 19:40:49 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.864 19:40:49 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:25:23.864 19:40:49 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:23.864 19:40:49 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:23.864 19:40:49 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:23.864 19:40:49 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:23.864 19:40:49 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:23.864 19:40:49 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:23.864 19:40:49 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:23.864 19:40:49 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:23.864 19:40:49 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:25:23.864 19:40:49 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:25:23.864 00:25:23.864 real 0m0.131s 00:25:23.864 user 0m0.063s 00:25:23.864 sys 0m0.076s 00:25:23.864 19:40:49 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:23.864 19:40:49 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:25:23.864 ************************************ 00:25:23.864 END TEST dma 00:25:23.864 ************************************ 00:25:23.864 19:40:49 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:23.864 19:40:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:23.864 19:40:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:23.864 19:40:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:23.864 ************************************ 00:25:23.864 START TEST nvmf_identify 00:25:23.864 ************************************ 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:23.864 * Looking for test storage... 
00:25:23.864 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:25:23.864 19:40:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:32.042 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:32.042 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:25:32.042 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:32.042 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:32.042 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:32.042 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:32.042 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:32.042 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:25:32.042 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:32.043 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:32.043 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:32.043 Found net devices under 0000:31:00.0: cvl_0_0 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:32.043 Found net devices under 0000:31:00.1: cvl_0_1 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:32.043 19:40:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:32.043 19:40:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:32.043 19:40:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:32.043 19:40:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:32.043 19:40:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:32.304 19:40:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:32.304 19:40:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:32.304 19:40:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:32.304 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:32.304 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:25:32.304 00:25:32.304 --- 10.0.0.2 ping statistics --- 00:25:32.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.304 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:25:32.304 19:40:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:32.304 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:32.304 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.337 ms 00:25:32.304 00:25:32.304 --- 10.0.0.1 ping statistics --- 00:25:32.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.304 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:25:32.304 19:40:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:32.304 19:40:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:25:32.304 19:40:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:32.304 19:40:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:32.304 19:40:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:32.304 19:40:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:32.304 19:40:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:32.304 19:40:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:32.304 19:40:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:32.304 19:40:58 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:32.304 19:40:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:32.304 19:40:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:32.304 19:40:58 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3702657 00:25:32.304 19:40:58 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:32.304 19:40:58 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:32.304 19:40:58 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3702657 00:25:32.304 19:40:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 3702657 ']' 00:25:32.304 19:40:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:32.304 19:40:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:32.304 19:40:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:32.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:32.304 19:40:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:32.304 19:40:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:32.304 [2024-05-15 19:40:58.379055] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:25:32.304 [2024-05-15 19:40:58.379106] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:32.304 EAL: No free 2048 kB hugepages reported on node 1 00:25:32.304 [2024-05-15 19:40:58.468049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:32.564 [2024-05-15 19:40:58.543011] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:32.564 [2024-05-15 19:40:58.543061] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:32.564 [2024-05-15 19:40:58.543070] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:32.565 [2024-05-15 19:40:58.543078] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:32.565 [2024-05-15 19:40:58.543085] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:32.565 [2024-05-15 19:40:58.543207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:32.565 [2024-05-15 19:40:58.543348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:32.565 [2024-05-15 19:40:58.543447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.565 [2024-05-15 19:40:58.543446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:33.136 19:40:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:33.136 19:40:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:25:33.136 19:40:59 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:33.136 19:40:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.136 19:40:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:33.136 [2024-05-15 19:40:59.270061] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:33.136 19:40:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.136 19:40:59 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:33.136 19:40:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:33.136 19:40:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:33.136 19:40:59 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:33.136 19:40:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.136 19:40:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:33.400 Malloc0 00:25:33.400 19:40:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.400 19:40:59 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:33.400 19:40:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.400 19:40:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:33.400 19:40:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.400 19:40:59 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:33.400 19:40:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.400 19:40:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:33.400 19:40:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.400 19:40:59 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:33.400 19:40:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:25:33.400 19:40:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:33.400 [2024-05-15 19:40:59.369326] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:33.400 [2024-05-15 19:40:59.369580] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:33.400 19:40:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.400 19:40:59 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:33.400 19:40:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.400 19:40:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:33.400 19:40:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.400 19:40:59 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:33.400 19:40:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.400 19:40:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:33.400 [ 00:25:33.400 { 00:25:33.400 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:33.400 "subtype": "Discovery", 00:25:33.400 "listen_addresses": [ 00:25:33.400 { 00:25:33.400 "trtype": "TCP", 00:25:33.400 "adrfam": "IPv4", 00:25:33.400 "traddr": "10.0.0.2", 00:25:33.400 "trsvcid": "4420" 00:25:33.400 } 00:25:33.400 ], 00:25:33.400 "allow_any_host": true, 00:25:33.400 "hosts": [] 00:25:33.400 }, 00:25:33.400 { 00:25:33.400 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:33.400 "subtype": "NVMe", 00:25:33.400 "listen_addresses": [ 00:25:33.400 { 00:25:33.400 "trtype": "TCP", 00:25:33.400 "adrfam": "IPv4", 00:25:33.400 "traddr": "10.0.0.2", 00:25:33.400 "trsvcid": "4420" 00:25:33.400 } 00:25:33.400 ], 00:25:33.400 "allow_any_host": true, 00:25:33.400 "hosts": [], 00:25:33.400 "serial_number": "SPDK00000000000001", 00:25:33.400 "model_number": "SPDK bdev Controller", 00:25:33.400 "max_namespaces": 32, 00:25:33.400 "min_cntlid": 1, 00:25:33.400 "max_cntlid": 65519, 00:25:33.400 "namespaces": [ 00:25:33.400 { 00:25:33.400 "nsid": 1, 00:25:33.400 "bdev_name": "Malloc0", 00:25:33.400 "name": "Malloc0", 00:25:33.400 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:33.400 "eui64": "ABCDEF0123456789", 00:25:33.400 "uuid": "58861591-8b28-4d0c-a98b-5fe44a7433b7" 00:25:33.400 } 00:25:33.400 ] 00:25:33.400 } 00:25:33.400 ] 00:25:33.400 19:40:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.400 19:40:59 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:33.400 [2024-05-15 19:40:59.425836] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
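The subsystem layout dumped by nvmf_get_subsystems above is built by the rpc_cmd calls visible in the trace (rpc_cmd is the test wrapper around scripts/rpc.py). Consolidated as a hedged sketch, with nvmf_tgt already running and rpc.py talking to the default /var/tmp/spdk.sock, the same target configuration is:

    # Hedged sketch: the target configuration issued above, expressed as plain rpc.py calls.
    # Arguments are copied from the rpc_cmd lines in this log; the socket path is assumed default.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_get_subsystems

The spdk_nvme_identify run that follows connects to the discovery subsystem at 10.0.0.2:4420; a rough kernel-side cross-check (not part of this test) would be `nvme discover -t tcp -a 10.0.0.2 -s 4420` from the initiator namespace.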
00:25:33.400 [2024-05-15 19:40:59.425876] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3702879 ] 00:25:33.400 EAL: No free 2048 kB hugepages reported on node 1 00:25:33.400 [2024-05-15 19:40:59.457979] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:25:33.400 [2024-05-15 19:40:59.458025] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:33.400 [2024-05-15 19:40:59.458030] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:33.400 [2024-05-15 19:40:59.458041] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:33.400 [2024-05-15 19:40:59.458048] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:33.400 [2024-05-15 19:40:59.461346] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:25:33.400 [2024-05-15 19:40:59.461380] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x17c3c30 0 00:25:33.400 [2024-05-15 19:40:59.461659] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:33.400 [2024-05-15 19:40:59.461670] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:33.400 [2024-05-15 19:40:59.461675] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:33.400 [2024-05-15 19:40:59.461678] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:33.400 [2024-05-15 19:40:59.461711] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.400 [2024-05-15 19:40:59.461717] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.400 [2024-05-15 19:40:59.461721] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17c3c30) 00:25:33.400 [2024-05-15 19:40:59.461737] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:33.400 [2024-05-15 19:40:59.461752] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182b980, cid 0, qid 0 00:25:33.400 [2024-05-15 19:40:59.469325] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.400 [2024-05-15 19:40:59.469334] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.400 [2024-05-15 19:40:59.469338] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.400 [2024-05-15 19:40:59.469342] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x182b980) on tqpair=0x17c3c30 00:25:33.400 [2024-05-15 19:40:59.469353] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:33.400 [2024-05-15 19:40:59.469359] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:25:33.400 [2024-05-15 19:40:59.469364] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:25:33.400 [2024-05-15 19:40:59.469376] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.400 [2024-05-15 19:40:59.469380] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:25:33.400 [2024-05-15 19:40:59.469383] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17c3c30) 00:25:33.400 [2024-05-15 19:40:59.469391] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.400 [2024-05-15 19:40:59.469403] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182b980, cid 0, qid 0 00:25:33.400 [2024-05-15 19:40:59.469645] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.400 [2024-05-15 19:40:59.469651] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.400 [2024-05-15 19:40:59.469654] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.400 [2024-05-15 19:40:59.469658] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x182b980) on tqpair=0x17c3c30 00:25:33.400 [2024-05-15 19:40:59.469664] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:25:33.400 [2024-05-15 19:40:59.469671] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:25:33.400 [2024-05-15 19:40:59.469677] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.400 [2024-05-15 19:40:59.469681] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.400 [2024-05-15 19:40:59.469684] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17c3c30) 00:25:33.401 [2024-05-15 19:40:59.469691] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.401 [2024-05-15 19:40:59.469701] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182b980, cid 0, qid 0 00:25:33.401 [2024-05-15 19:40:59.469941] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.401 [2024-05-15 19:40:59.469947] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.401 [2024-05-15 19:40:59.469950] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.401 [2024-05-15 19:40:59.469954] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x182b980) on tqpair=0x17c3c30 00:25:33.401 [2024-05-15 19:40:59.469960] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:25:33.401 [2024-05-15 19:40:59.469967] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:25:33.401 [2024-05-15 19:40:59.469974] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.401 [2024-05-15 19:40:59.469977] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.401 [2024-05-15 19:40:59.469981] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17c3c30) 00:25:33.401 [2024-05-15 19:40:59.469990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.401 [2024-05-15 19:40:59.470000] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182b980, cid 0, qid 0 00:25:33.401 [2024-05-15 19:40:59.470211] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.401 [2024-05-15 
19:40:59.470218] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.401 [2024-05-15 19:40:59.470221] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.401 [2024-05-15 19:40:59.470225] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x182b980) on tqpair=0x17c3c30 00:25:33.401 [2024-05-15 19:40:59.470230] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:33.401 [2024-05-15 19:40:59.470240] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.401 [2024-05-15 19:40:59.470243] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.401 [2024-05-15 19:40:59.470247] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17c3c30) 00:25:33.401 [2024-05-15 19:40:59.470253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.401 [2024-05-15 19:40:59.470263] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182b980, cid 0, qid 0 00:25:33.401 [2024-05-15 19:40:59.470496] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.401 [2024-05-15 19:40:59.470502] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.401 [2024-05-15 19:40:59.470506] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.401 [2024-05-15 19:40:59.470509] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x182b980) on tqpair=0x17c3c30 00:25:33.401 [2024-05-15 19:40:59.470514] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:25:33.401 [2024-05-15 19:40:59.470519] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:25:33.401 [2024-05-15 19:40:59.470526] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:33.401 [2024-05-15 19:40:59.470631] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:25:33.401 [2024-05-15 19:40:59.470636] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:33.401 [2024-05-15 19:40:59.470644] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.401 [2024-05-15 19:40:59.470648] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.401 [2024-05-15 19:40:59.470651] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17c3c30) 00:25:33.401 [2024-05-15 19:40:59.470658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.401 [2024-05-15 19:40:59.470668] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182b980, cid 0, qid 0 00:25:33.401 [2024-05-15 19:40:59.470903] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.401 [2024-05-15 19:40:59.470909] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.401 [2024-05-15 19:40:59.470912] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:25:33.401 [2024-05-15 19:40:59.470915] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x182b980) on tqpair=0x17c3c30 00:25:33.401 [2024-05-15 19:40:59.470921] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:33.401 [2024-05-15 19:40:59.470929] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.401 [2024-05-15 19:40:59.470933] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.401 [2024-05-15 19:40:59.470939] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17c3c30) 00:25:33.401 [2024-05-15 19:40:59.470946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.401 [2024-05-15 19:40:59.470955] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182b980, cid 0, qid 0 00:25:33.401 [2024-05-15 19:40:59.471156] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.401 [2024-05-15 19:40:59.471162] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.401 [2024-05-15 19:40:59.471165] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.401 [2024-05-15 19:40:59.471169] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x182b980) on tqpair=0x17c3c30 00:25:33.401 [2024-05-15 19:40:59.471174] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:33.401 [2024-05-15 19:40:59.471178] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:25:33.401 [2024-05-15 19:40:59.471185] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:25:33.401 [2024-05-15 19:40:59.471193] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:25:33.401 [2024-05-15 19:40:59.471202] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.401 [2024-05-15 19:40:59.471205] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17c3c30) 00:25:33.401 [2024-05-15 19:40:59.471212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.401 [2024-05-15 19:40:59.471222] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182b980, cid 0, qid 0 00:25:33.401 [2024-05-15 19:40:59.471461] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:33.401 [2024-05-15 19:40:59.471468] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:33.401 [2024-05-15 19:40:59.471471] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:33.401 [2024-05-15 19:40:59.471475] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17c3c30): datao=0, datal=4096, cccid=0 00:25:33.401 [2024-05-15 19:40:59.471479] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x182b980) on tqpair(0x17c3c30): expected_datao=0, payload_size=4096 00:25:33.401 [2024-05-15 19:40:59.471484] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.401 [2024-05-15 19:40:59.471522] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:33.401 [2024-05-15 19:40:59.471527] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:33.401 [2024-05-15 19:40:59.471758] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.401 [2024-05-15 19:40:59.471764] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.401 [2024-05-15 19:40:59.471768] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.401 [2024-05-15 19:40:59.471772] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x182b980) on tqpair=0x17c3c30 00:25:33.401 [2024-05-15 19:40:59.471780] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:25:33.401 [2024-05-15 19:40:59.471785] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:25:33.401 [2024-05-15 19:40:59.471790] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:25:33.401 [2024-05-15 19:40:59.471794] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:25:33.401 [2024-05-15 19:40:59.471799] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:25:33.401 [2024-05-15 19:40:59.471805] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:25:33.401 [2024-05-15 19:40:59.471816] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:25:33.401 [2024-05-15 19:40:59.471824] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.401 [2024-05-15 19:40:59.471828] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.401 [2024-05-15 19:40:59.471832] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17c3c30) 00:25:33.401 [2024-05-15 19:40:59.471839] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:33.401 [2024-05-15 19:40:59.471849] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182b980, cid 0, qid 0 00:25:33.401 [2024-05-15 19:40:59.472062] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.401 [2024-05-15 19:40:59.472069] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.401 [2024-05-15 19:40:59.472072] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.401 [2024-05-15 19:40:59.472076] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x182b980) on tqpair=0x17c3c30 00:25:33.401 [2024-05-15 19:40:59.472086] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.401 [2024-05-15 19:40:59.472090] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.401 [2024-05-15 19:40:59.472093] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17c3c30) 00:25:33.401 [2024-05-15 19:40:59.472099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:25:33.401 [2024-05-15 19:40:59.472105] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.401 [2024-05-15 19:40:59.472109] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.401 [2024-05-15 19:40:59.472112] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x17c3c30) 00:25:33.401 [2024-05-15 19:40:59.472118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.401 [2024-05-15 19:40:59.472124] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.401 [2024-05-15 19:40:59.472128] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.401 [2024-05-15 19:40:59.472131] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x17c3c30) 00:25:33.401 [2024-05-15 19:40:59.472136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.401 [2024-05-15 19:40:59.472142] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.401 [2024-05-15 19:40:59.472146] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.402 [2024-05-15 19:40:59.472150] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17c3c30) 00:25:33.402 [2024-05-15 19:40:59.472155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.402 [2024-05-15 19:40:59.472160] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:25:33.402 [2024-05-15 19:40:59.472167] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:33.402 [2024-05-15 19:40:59.472174] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.402 [2024-05-15 19:40:59.472177] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17c3c30) 00:25:33.402 [2024-05-15 19:40:59.472184] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.402 [2024-05-15 19:40:59.472195] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182b980, cid 0, qid 0 00:25:33.402 [2024-05-15 19:40:59.472202] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182bae0, cid 1, qid 0 00:25:33.402 [2024-05-15 19:40:59.472206] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182bc40, cid 2, qid 0 00:25:33.402 [2024-05-15 19:40:59.472211] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182bda0, cid 3, qid 0 00:25:33.402 [2024-05-15 19:40:59.472216] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182bf00, cid 4, qid 0 00:25:33.402 [2024-05-15 19:40:59.472452] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.402 [2024-05-15 19:40:59.472459] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.402 [2024-05-15 19:40:59.472463] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.402 [2024-05-15 19:40:59.472466] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x182bf00) on tqpair=0x17c3c30 
00:25:33.402 [2024-05-15 19:40:59.472474] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:25:33.402 [2024-05-15 19:40:59.472479] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:25:33.402 [2024-05-15 19:40:59.472488] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.402 [2024-05-15 19:40:59.472492] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17c3c30) 00:25:33.402 [2024-05-15 19:40:59.472499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.402 [2024-05-15 19:40:59.472509] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182bf00, cid 4, qid 0 00:25:33.402 [2024-05-15 19:40:59.472748] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:33.402 [2024-05-15 19:40:59.472755] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:33.402 [2024-05-15 19:40:59.472758] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:33.402 [2024-05-15 19:40:59.472761] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17c3c30): datao=0, datal=4096, cccid=4 00:25:33.402 [2024-05-15 19:40:59.472766] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x182bf00) on tqpair(0x17c3c30): expected_datao=0, payload_size=4096 00:25:33.402 [2024-05-15 19:40:59.472770] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.402 [2024-05-15 19:40:59.472776] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:33.402 [2024-05-15 19:40:59.472780] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:33.402 [2024-05-15 19:40:59.472956] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.402 [2024-05-15 19:40:59.472962] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.402 [2024-05-15 19:40:59.472965] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.402 [2024-05-15 19:40:59.472969] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x182bf00) on tqpair=0x17c3c30 00:25:33.402 [2024-05-15 19:40:59.472980] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:25:33.402 [2024-05-15 19:40:59.473003] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.402 [2024-05-15 19:40:59.473007] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17c3c30) 00:25:33.402 [2024-05-15 19:40:59.473014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.402 [2024-05-15 19:40:59.473020] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.402 [2024-05-15 19:40:59.473024] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.402 [2024-05-15 19:40:59.473027] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x17c3c30) 00:25:33.402 [2024-05-15 19:40:59.473033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.402 [2024-05-15 19:40:59.473050] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182bf00, cid 4, qid 0 00:25:33.402 [2024-05-15 19:40:59.473055] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182c060, cid 5, qid 0 00:25:33.402 [2024-05-15 19:40:59.477322] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:33.402 [2024-05-15 19:40:59.477331] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:33.402 [2024-05-15 19:40:59.477334] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:33.402 [2024-05-15 19:40:59.477338] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17c3c30): datao=0, datal=1024, cccid=4 00:25:33.402 [2024-05-15 19:40:59.477342] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x182bf00) on tqpair(0x17c3c30): expected_datao=0, payload_size=1024 00:25:33.402 [2024-05-15 19:40:59.477346] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.402 [2024-05-15 19:40:59.477353] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:33.402 [2024-05-15 19:40:59.477356] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:33.402 [2024-05-15 19:40:59.477362] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.402 [2024-05-15 19:40:59.477368] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.402 [2024-05-15 19:40:59.477371] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.402 [2024-05-15 19:40:59.477374] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x182c060) on tqpair=0x17c3c30 00:25:33.402 [2024-05-15 19:40:59.516322] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.402 [2024-05-15 19:40:59.516331] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.402 [2024-05-15 19:40:59.516334] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.402 [2024-05-15 19:40:59.516338] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x182bf00) on tqpair=0x17c3c30 00:25:33.402 [2024-05-15 19:40:59.516349] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.402 [2024-05-15 19:40:59.516353] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17c3c30) 00:25:33.402 [2024-05-15 19:40:59.516360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.402 [2024-05-15 19:40:59.516375] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182bf00, cid 4, qid 0 00:25:33.402 [2024-05-15 19:40:59.516605] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:33.402 [2024-05-15 19:40:59.516611] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:33.402 [2024-05-15 19:40:59.516614] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:33.402 [2024-05-15 19:40:59.516618] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17c3c30): datao=0, datal=3072, cccid=4 00:25:33.402 [2024-05-15 19:40:59.516622] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x182bf00) on tqpair(0x17c3c30): expected_datao=0, payload_size=3072 00:25:33.402 [2024-05-15 19:40:59.516626] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.402 [2024-05-15 19:40:59.516633] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:25:33.402 [2024-05-15 19:40:59.516636] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:33.402 [2024-05-15 19:40:59.516800] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.402 [2024-05-15 19:40:59.516807] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.402 [2024-05-15 19:40:59.516810] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.402 [2024-05-15 19:40:59.516814] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x182bf00) on tqpair=0x17c3c30 00:25:33.402 [2024-05-15 19:40:59.516823] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.402 [2024-05-15 19:40:59.516826] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17c3c30) 00:25:33.402 [2024-05-15 19:40:59.516833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.402 [2024-05-15 19:40:59.516850] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182bf00, cid 4, qid 0 00:25:33.402 [2024-05-15 19:40:59.517082] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:33.402 [2024-05-15 19:40:59.517088] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:33.402 [2024-05-15 19:40:59.517092] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:33.402 [2024-05-15 19:40:59.517095] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17c3c30): datao=0, datal=8, cccid=4 00:25:33.402 [2024-05-15 19:40:59.517099] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x182bf00) on tqpair(0x17c3c30): expected_datao=0, payload_size=8 00:25:33.402 [2024-05-15 19:40:59.517103] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.402 [2024-05-15 19:40:59.517110] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:33.402 [2024-05-15 19:40:59.517113] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:33.402 [2024-05-15 19:40:59.557522] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.402 [2024-05-15 19:40:59.557534] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.402 [2024-05-15 19:40:59.557538] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.402 [2024-05-15 19:40:59.557542] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x182bf00) on tqpair=0x17c3c30 00:25:33.402 ===================================================== 00:25:33.402 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:33.402 ===================================================== 00:25:33.402 Controller Capabilities/Features 00:25:33.402 ================================ 00:25:33.402 Vendor ID: 0000 00:25:33.402 Subsystem Vendor ID: 0000 00:25:33.402 Serial Number: .................... 00:25:33.402 Model Number: ........................................ 
00:25:33.402 Firmware Version: 24.05 00:25:33.402 Recommended Arb Burst: 0 00:25:33.402 IEEE OUI Identifier: 00 00 00 00:25:33.402 Multi-path I/O 00:25:33.402 May have multiple subsystem ports: No 00:25:33.402 May have multiple controllers: No 00:25:33.402 Associated with SR-IOV VF: No 00:25:33.402 Max Data Transfer Size: 131072 00:25:33.402 Max Number of Namespaces: 0 00:25:33.402 Max Number of I/O Queues: 1024 00:25:33.402 NVMe Specification Version (VS): 1.3 00:25:33.402 NVMe Specification Version (Identify): 1.3 00:25:33.402 Maximum Queue Entries: 128 00:25:33.402 Contiguous Queues Required: Yes 00:25:33.402 Arbitration Mechanisms Supported 00:25:33.403 Weighted Round Robin: Not Supported 00:25:33.403 Vendor Specific: Not Supported 00:25:33.403 Reset Timeout: 15000 ms 00:25:33.403 Doorbell Stride: 4 bytes 00:25:33.403 NVM Subsystem Reset: Not Supported 00:25:33.403 Command Sets Supported 00:25:33.403 NVM Command Set: Supported 00:25:33.403 Boot Partition: Not Supported 00:25:33.403 Memory Page Size Minimum: 4096 bytes 00:25:33.403 Memory Page Size Maximum: 4096 bytes 00:25:33.403 Persistent Memory Region: Not Supported 00:25:33.403 Optional Asynchronous Events Supported 00:25:33.403 Namespace Attribute Notices: Not Supported 00:25:33.403 Firmware Activation Notices: Not Supported 00:25:33.403 ANA Change Notices: Not Supported 00:25:33.403 PLE Aggregate Log Change Notices: Not Supported 00:25:33.403 LBA Status Info Alert Notices: Not Supported 00:25:33.403 EGE Aggregate Log Change Notices: Not Supported 00:25:33.403 Normal NVM Subsystem Shutdown event: Not Supported 00:25:33.403 Zone Descriptor Change Notices: Not Supported 00:25:33.403 Discovery Log Change Notices: Supported 00:25:33.403 Controller Attributes 00:25:33.403 128-bit Host Identifier: Not Supported 00:25:33.403 Non-Operational Permissive Mode: Not Supported 00:25:33.403 NVM Sets: Not Supported 00:25:33.403 Read Recovery Levels: Not Supported 00:25:33.403 Endurance Groups: Not Supported 00:25:33.403 Predictable Latency Mode: Not Supported 00:25:33.403 Traffic Based Keep ALive: Not Supported 00:25:33.403 Namespace Granularity: Not Supported 00:25:33.403 SQ Associations: Not Supported 00:25:33.403 UUID List: Not Supported 00:25:33.403 Multi-Domain Subsystem: Not Supported 00:25:33.403 Fixed Capacity Management: Not Supported 00:25:33.403 Variable Capacity Management: Not Supported 00:25:33.403 Delete Endurance Group: Not Supported 00:25:33.403 Delete NVM Set: Not Supported 00:25:33.403 Extended LBA Formats Supported: Not Supported 00:25:33.403 Flexible Data Placement Supported: Not Supported 00:25:33.403 00:25:33.403 Controller Memory Buffer Support 00:25:33.403 ================================ 00:25:33.403 Supported: No 00:25:33.403 00:25:33.403 Persistent Memory Region Support 00:25:33.403 ================================ 00:25:33.403 Supported: No 00:25:33.403 00:25:33.403 Admin Command Set Attributes 00:25:33.403 ============================ 00:25:33.403 Security Send/Receive: Not Supported 00:25:33.403 Format NVM: Not Supported 00:25:33.403 Firmware Activate/Download: Not Supported 00:25:33.403 Namespace Management: Not Supported 00:25:33.403 Device Self-Test: Not Supported 00:25:33.403 Directives: Not Supported 00:25:33.403 NVMe-MI: Not Supported 00:25:33.403 Virtualization Management: Not Supported 00:25:33.403 Doorbell Buffer Config: Not Supported 00:25:33.403 Get LBA Status Capability: Not Supported 00:25:33.403 Command & Feature Lockdown Capability: Not Supported 00:25:33.403 Abort Command Limit: 1 00:25:33.403 Async 
Event Request Limit: 4 00:25:33.403 Number of Firmware Slots: N/A 00:25:33.403 Firmware Slot 1 Read-Only: N/A 00:25:33.403 Firmware Activation Without Reset: N/A 00:25:33.403 Multiple Update Detection Support: N/A 00:25:33.403 Firmware Update Granularity: No Information Provided 00:25:33.403 Per-Namespace SMART Log: No 00:25:33.403 Asymmetric Namespace Access Log Page: Not Supported 00:25:33.403 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:33.403 Command Effects Log Page: Not Supported 00:25:33.403 Get Log Page Extended Data: Supported 00:25:33.403 Telemetry Log Pages: Not Supported 00:25:33.403 Persistent Event Log Pages: Not Supported 00:25:33.403 Supported Log Pages Log Page: May Support 00:25:33.403 Commands Supported & Effects Log Page: Not Supported 00:25:33.403 Feature Identifiers & Effects Log Page:May Support 00:25:33.403 NVMe-MI Commands & Effects Log Page: May Support 00:25:33.403 Data Area 4 for Telemetry Log: Not Supported 00:25:33.403 Error Log Page Entries Supported: 128 00:25:33.403 Keep Alive: Not Supported 00:25:33.403 00:25:33.403 NVM Command Set Attributes 00:25:33.403 ========================== 00:25:33.403 Submission Queue Entry Size 00:25:33.403 Max: 1 00:25:33.403 Min: 1 00:25:33.403 Completion Queue Entry Size 00:25:33.403 Max: 1 00:25:33.403 Min: 1 00:25:33.403 Number of Namespaces: 0 00:25:33.403 Compare Command: Not Supported 00:25:33.403 Write Uncorrectable Command: Not Supported 00:25:33.403 Dataset Management Command: Not Supported 00:25:33.403 Write Zeroes Command: Not Supported 00:25:33.403 Set Features Save Field: Not Supported 00:25:33.403 Reservations: Not Supported 00:25:33.403 Timestamp: Not Supported 00:25:33.403 Copy: Not Supported 00:25:33.403 Volatile Write Cache: Not Present 00:25:33.403 Atomic Write Unit (Normal): 1 00:25:33.403 Atomic Write Unit (PFail): 1 00:25:33.403 Atomic Compare & Write Unit: 1 00:25:33.403 Fused Compare & Write: Supported 00:25:33.403 Scatter-Gather List 00:25:33.403 SGL Command Set: Supported 00:25:33.403 SGL Keyed: Supported 00:25:33.403 SGL Bit Bucket Descriptor: Not Supported 00:25:33.403 SGL Metadata Pointer: Not Supported 00:25:33.403 Oversized SGL: Not Supported 00:25:33.403 SGL Metadata Address: Not Supported 00:25:33.403 SGL Offset: Supported 00:25:33.403 Transport SGL Data Block: Not Supported 00:25:33.403 Replay Protected Memory Block: Not Supported 00:25:33.403 00:25:33.403 Firmware Slot Information 00:25:33.403 ========================= 00:25:33.403 Active slot: 0 00:25:33.403 00:25:33.403 00:25:33.403 Error Log 00:25:33.403 ========= 00:25:33.403 00:25:33.403 Active Namespaces 00:25:33.403 ================= 00:25:33.403 Discovery Log Page 00:25:33.403 ================== 00:25:33.403 Generation Counter: 2 00:25:33.403 Number of Records: 2 00:25:33.403 Record Format: 0 00:25:33.403 00:25:33.403 Discovery Log Entry 0 00:25:33.403 ---------------------- 00:25:33.403 Transport Type: 3 (TCP) 00:25:33.403 Address Family: 1 (IPv4) 00:25:33.403 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:33.403 Entry Flags: 00:25:33.403 Duplicate Returned Information: 1 00:25:33.403 Explicit Persistent Connection Support for Discovery: 1 00:25:33.403 Transport Requirements: 00:25:33.403 Secure Channel: Not Required 00:25:33.403 Port ID: 0 (0x0000) 00:25:33.403 Controller ID: 65535 (0xffff) 00:25:33.403 Admin Max SQ Size: 128 00:25:33.403 Transport Service Identifier: 4420 00:25:33.403 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:33.403 Transport Address: 10.0.0.2 00:25:33.403 
Discovery Log Entry 1 00:25:33.403 ---------------------- 00:25:33.403 Transport Type: 3 (TCP) 00:25:33.403 Address Family: 1 (IPv4) 00:25:33.403 Subsystem Type: 2 (NVM Subsystem) 00:25:33.403 Entry Flags: 00:25:33.403 Duplicate Returned Information: 0 00:25:33.403 Explicit Persistent Connection Support for Discovery: 0 00:25:33.403 Transport Requirements: 00:25:33.403 Secure Channel: Not Required 00:25:33.403 Port ID: 0 (0x0000) 00:25:33.403 Controller ID: 65535 (0xffff) 00:25:33.403 Admin Max SQ Size: 128 00:25:33.403 Transport Service Identifier: 4420 00:25:33.403 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:25:33.403 Transport Address: 10.0.0.2 [2024-05-15 19:40:59.557629] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:25:33.403 [2024-05-15 19:40:59.557642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.403 [2024-05-15 19:40:59.557649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.403 [2024-05-15 19:40:59.557655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.403 [2024-05-15 19:40:59.557661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.403 [2024-05-15 19:40:59.557670] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.403 [2024-05-15 19:40:59.557674] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.403 [2024-05-15 19:40:59.557677] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17c3c30) 00:25:33.403 [2024-05-15 19:40:59.557684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.403 [2024-05-15 19:40:59.557698] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182bda0, cid 3, qid 0 00:25:33.403 [2024-05-15 19:40:59.557836] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.403 [2024-05-15 19:40:59.557843] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.403 [2024-05-15 19:40:59.557846] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.403 [2024-05-15 19:40:59.557850] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x182bda0) on tqpair=0x17c3c30 00:25:33.403 [2024-05-15 19:40:59.557857] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.403 [2024-05-15 19:40:59.557861] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.403 [2024-05-15 19:40:59.557864] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17c3c30) 00:25:33.403 [2024-05-15 19:40:59.557871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.403 [2024-05-15 19:40:59.557883] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182bda0, cid 3, qid 0 00:25:33.404 [2024-05-15 19:40:59.558136] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.404 [2024-05-15 19:40:59.558143] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.404 [2024-05-15 19:40:59.558146] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.404 [2024-05-15 19:40:59.558152] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x182bda0) on tqpair=0x17c3c30 00:25:33.404 [2024-05-15 19:40:59.558157] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:25:33.404 [2024-05-15 19:40:59.558162] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:25:33.404 [2024-05-15 19:40:59.558171] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.404 [2024-05-15 19:40:59.558175] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.404 [2024-05-15 19:40:59.558179] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17c3c30) 00:25:33.404 [2024-05-15 19:40:59.558185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.404 [2024-05-15 19:40:59.558195] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182bda0, cid 3, qid 0 00:25:33.404 [2024-05-15 19:40:59.558418] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.404 [2024-05-15 19:40:59.558425] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.404 [2024-05-15 19:40:59.558428] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.404 [2024-05-15 19:40:59.558432] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x182bda0) on tqpair=0x17c3c30 00:25:33.404 [2024-05-15 19:40:59.558442] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.404 [2024-05-15 19:40:59.558446] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.404 [2024-05-15 19:40:59.558449] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17c3c30) 00:25:33.404 [2024-05-15 19:40:59.558456] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.404 [2024-05-15 19:40:59.558466] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182bda0, cid 3, qid 0 00:25:33.404 [2024-05-15 19:40:59.558740] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.404 [2024-05-15 19:40:59.558746] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.404 [2024-05-15 19:40:59.558750] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.404 [2024-05-15 19:40:59.558753] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x182bda0) on tqpair=0x17c3c30 00:25:33.404 [2024-05-15 19:40:59.558763] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.404 [2024-05-15 19:40:59.558767] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.404 [2024-05-15 19:40:59.558771] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17c3c30) 00:25:33.404 [2024-05-15 19:40:59.558777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.404 [2024-05-15 19:40:59.558786] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182bda0, cid 3, qid 0 00:25:33.404 [2024-05-15 19:40:59.558993] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.404 [2024-05-15 
19:40:59.558999] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.404 [2024-05-15 19:40:59.559003] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.404 [2024-05-15 19:40:59.559006] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x182bda0) on tqpair=0x17c3c30 00:25:33.404 [2024-05-15 19:40:59.559016] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.404 [2024-05-15 19:40:59.559020] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.404 [2024-05-15 19:40:59.559023] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17c3c30) 00:25:33.404 [2024-05-15 19:40:59.559030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.404 [2024-05-15 19:40:59.559039] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182bda0, cid 3, qid 0 00:25:33.404 [2024-05-15 19:40:59.559245] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.404 [2024-05-15 19:40:59.559253] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.404 [2024-05-15 19:40:59.559257] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.404 [2024-05-15 19:40:59.559260] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x182bda0) on tqpair=0x17c3c30 00:25:33.404 [2024-05-15 19:40:59.559270] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.404 [2024-05-15 19:40:59.559274] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.404 [2024-05-15 19:40:59.559278] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17c3c30) 00:25:33.404 [2024-05-15 19:40:59.559284] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.404 [2024-05-15 19:40:59.559294] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x182bda0, cid 3, qid 0 00:25:33.404 [2024-05-15 19:40:59.563322] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.404 [2024-05-15 19:40:59.563330] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.404 [2024-05-15 19:40:59.563333] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.404 [2024-05-15 19:40:59.563337] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x182bda0) on tqpair=0x17c3c30 00:25:33.404 [2024-05-15 19:40:59.563345] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:25:33.404 00:25:33.404 19:40:59 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:25:33.669 [2024-05-15 19:40:59.600922] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
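The identify.sh step above invokes spdk_nvme_identify with a transport ID string (trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1). For readers following the log, here is a minimal sketch of how such a string maps onto SPDK's public host API; it is not the test's actual code, the program name and error handling are placeholders, and it assumes the spdk/env.h and spdk/nvme.h interfaces as shipped in this tree.

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid = {0};
    struct spdk_nvme_ctrlr *ctrlr;
    const struct spdk_nvme_ctrlr_data *cdata;

    /* Bring up the SPDK environment (hugepages, logging, etc.). */
    spdk_env_opts_init(&env_opts);
    env_opts.name = "identify_sketch";   /* hypothetical app name */
    if (spdk_env_init(&env_opts) != 0) {
        return 1;
    }

    /* Same -r string format that identify.sh passes to spdk_nvme_identify. */
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return 1;
    }

    /* Synchronous connect; internally this drives the admin-queue state
     * machine whose DEBUG transitions are logged below. */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        return 1;
    }

    /* The IDENTIFY CONTROLLER data backs the report printed later in the log. */
    cdata = spdk_nvme_ctrlr_get_data(ctrlr);
    printf("Model Number: %.40s\n", cdata->mn);

    spdk_nvme_detach(ctrlr);
    return 0;
}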
00:25:33.669 [2024-05-15 19:40:59.600964] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3702966 ] 00:25:33.669 EAL: No free 2048 kB hugepages reported on node 1 00:25:33.669 [2024-05-15 19:40:59.633911] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:25:33.669 [2024-05-15 19:40:59.633954] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:33.669 [2024-05-15 19:40:59.633959] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:33.669 [2024-05-15 19:40:59.633969] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:33.669 [2024-05-15 19:40:59.633976] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:33.669 [2024-05-15 19:40:59.637341] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:25:33.669 [2024-05-15 19:40:59.637367] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xee3c30 0 00:25:33.669 [2024-05-15 19:40:59.645320] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:33.669 [2024-05-15 19:40:59.645334] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:33.669 [2024-05-15 19:40:59.645339] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:33.669 [2024-05-15 19:40:59.645342] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:33.669 [2024-05-15 19:40:59.645374] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.669 [2024-05-15 19:40:59.645380] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.669 [2024-05-15 19:40:59.645384] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xee3c30) 00:25:33.669 [2024-05-15 19:40:59.645396] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:33.669 [2024-05-15 19:40:59.645415] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4b980, cid 0, qid 0 00:25:33.669 [2024-05-15 19:40:59.653324] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.669 [2024-05-15 19:40:59.653333] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.669 [2024-05-15 19:40:59.653337] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.669 [2024-05-15 19:40:59.653341] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf4b980) on tqpair=0xee3c30 00:25:33.669 [2024-05-15 19:40:59.653350] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:33.669 [2024-05-15 19:40:59.653356] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:25:33.669 [2024-05-15 19:40:59.653361] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:25:33.669 [2024-05-15 19:40:59.653372] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.669 [2024-05-15 19:40:59.653376] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.669 [2024-05-15 
19:40:59.653380] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xee3c30) 00:25:33.669 [2024-05-15 19:40:59.653387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.669 [2024-05-15 19:40:59.653399] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4b980, cid 0, qid 0 00:25:33.669 [2024-05-15 19:40:59.653626] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.669 [2024-05-15 19:40:59.653633] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.669 [2024-05-15 19:40:59.653636] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.669 [2024-05-15 19:40:59.653640] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf4b980) on tqpair=0xee3c30 00:25:33.669 [2024-05-15 19:40:59.653645] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:25:33.669 [2024-05-15 19:40:59.653653] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:25:33.669 [2024-05-15 19:40:59.653659] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.669 [2024-05-15 19:40:59.653663] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.669 [2024-05-15 19:40:59.653666] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xee3c30) 00:25:33.669 [2024-05-15 19:40:59.653673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.669 [2024-05-15 19:40:59.653683] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4b980, cid 0, qid 0 00:25:33.669 [2024-05-15 19:40:59.653889] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.669 [2024-05-15 19:40:59.653896] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.669 [2024-05-15 19:40:59.653899] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.669 [2024-05-15 19:40:59.653903] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf4b980) on tqpair=0xee3c30 00:25:33.669 [2024-05-15 19:40:59.653908] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:25:33.669 [2024-05-15 19:40:59.653916] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:25:33.669 [2024-05-15 19:40:59.653922] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.669 [2024-05-15 19:40:59.653925] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.669 [2024-05-15 19:40:59.653929] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xee3c30) 00:25:33.669 [2024-05-15 19:40:59.653936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.669 [2024-05-15 19:40:59.653949] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4b980, cid 0, qid 0 00:25:33.669 [2024-05-15 19:40:59.654161] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.669 [2024-05-15 19:40:59.654168] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.669 
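The "setting state to read vs" and "read cap" transitions above correspond to FABRIC PROPERTY GET commands that fetch the controller's VS and CAP registers over the admin queue. As a rough sketch only: once a controller is attached (for example the ctrlr from the earlier sketch), the cached register values can be read back through SPDK's public accessors; the field names follow the headers in this tree and the print format is purely illustrative.

#include <stdio.h>
#include "spdk/nvme.h"

/* Assumes 'ctrlr' was obtained via spdk_nvme_connect() as in the earlier sketch. */
static void print_vs_and_cap(struct spdk_nvme_ctrlr *ctrlr)
{
    union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
    union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);

    /* VS 1.3 and MQES+1 = 128 match the identify report later in this log. */
    printf("NVMe spec version: %u.%u\n",
           (unsigned)vs.bits.mjr, (unsigned)vs.bits.mnr);
    printf("Maximum queue entries: %u\n", (unsigned)cap.bits.mqes + 1);
}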
[2024-05-15 19:40:59.654171] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.669 [2024-05-15 19:40:59.654175] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf4b980) on tqpair=0xee3c30 00:25:33.669 [2024-05-15 19:40:59.654179] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:33.669 [2024-05-15 19:40:59.654188] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.669 [2024-05-15 19:40:59.654192] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.670 [2024-05-15 19:40:59.654196] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xee3c30) 00:25:33.670 [2024-05-15 19:40:59.654202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.670 [2024-05-15 19:40:59.654212] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4b980, cid 0, qid 0 00:25:33.670 [2024-05-15 19:40:59.654399] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.670 [2024-05-15 19:40:59.654406] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.670 [2024-05-15 19:40:59.654409] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.670 [2024-05-15 19:40:59.654413] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf4b980) on tqpair=0xee3c30 00:25:33.670 [2024-05-15 19:40:59.654417] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:25:33.670 [2024-05-15 19:40:59.654422] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:25:33.670 [2024-05-15 19:40:59.654429] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:33.670 [2024-05-15 19:40:59.654534] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:25:33.670 [2024-05-15 19:40:59.654538] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:33.670 [2024-05-15 19:40:59.654545] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.670 [2024-05-15 19:40:59.654549] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.670 [2024-05-15 19:40:59.654552] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xee3c30) 00:25:33.670 [2024-05-15 19:40:59.654559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.670 [2024-05-15 19:40:59.654569] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4b980, cid 0, qid 0 00:25:33.670 [2024-05-15 19:40:59.654764] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.670 [2024-05-15 19:40:59.654770] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.670 [2024-05-15 19:40:59.654773] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.670 [2024-05-15 19:40:59.654777] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf4b980) on tqpair=0xee3c30 00:25:33.670 
[2024-05-15 19:40:59.654781] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:33.670 [2024-05-15 19:40:59.654791] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.670 [2024-05-15 19:40:59.654794] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.670 [2024-05-15 19:40:59.654798] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xee3c30) 00:25:33.670 [2024-05-15 19:40:59.654804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.670 [2024-05-15 19:40:59.654816] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4b980, cid 0, qid 0 00:25:33.670 [2024-05-15 19:40:59.654996] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.670 [2024-05-15 19:40:59.655002] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.670 [2024-05-15 19:40:59.655006] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.670 [2024-05-15 19:40:59.655009] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf4b980) on tqpair=0xee3c30 00:25:33.670 [2024-05-15 19:40:59.655014] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:33.670 [2024-05-15 19:40:59.655018] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:25:33.670 [2024-05-15 19:40:59.655025] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:25:33.670 [2024-05-15 19:40:59.655037] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:25:33.670 [2024-05-15 19:40:59.655046] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.670 [2024-05-15 19:40:59.655049] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xee3c30) 00:25:33.670 [2024-05-15 19:40:59.655056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.670 [2024-05-15 19:40:59.655066] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4b980, cid 0, qid 0 00:25:33.670 [2024-05-15 19:40:59.655325] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:33.670 [2024-05-15 19:40:59.655332] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:33.670 [2024-05-15 19:40:59.655336] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:33.670 [2024-05-15 19:40:59.655339] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xee3c30): datao=0, datal=4096, cccid=0 00:25:33.670 [2024-05-15 19:40:59.655344] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf4b980) on tqpair(0xee3c30): expected_datao=0, payload_size=4096 00:25:33.670 [2024-05-15 19:40:59.655348] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.670 [2024-05-15 19:40:59.655382] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:33.670 [2024-05-15 19:40:59.655387] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
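The sequence above is the controller enable handshake: CC.EN is set to 1, the host waits for CSTS.RDY = 1, and the first IDENTIFY command follows. On the host side these transitions are driven by repeatedly reaping admin completions. The fragment below is only an illustration of that polling model under the same assumptions as the earlier sketches; spdk_nvme_connect() already performs the full initialization internally, so this is not something the test itself has to call.

#include "spdk/nvme.h"

/* Assumes 'ctrlr' is an attached controller as in the earlier sketches. */
static void poll_admin_queue(struct spdk_nvme_ctrlr *ctrlr)
{
    union spdk_nvme_csts_register csts;

    /* Each call reaps admin completions and advances outstanding admin commands,
     * producing the "setting state to ..." transitions seen in this log. */
    spdk_nvme_ctrlr_process_admin_completions(ctrlr);

    /* CSTS.RDY = 1 is the condition the "wait for CSTS.RDY = 1" state polls for. */
    csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);
    if (csts.bits.rdy) {
        /* Controller is ready; initialization continues with IDENTIFY. */
    }
}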
00:25:33.670 [2024-05-15 19:40:59.701322] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.670 [2024-05-15 19:40:59.701332] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.670 [2024-05-15 19:40:59.701336] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.670 [2024-05-15 19:40:59.701339] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf4b980) on tqpair=0xee3c30 00:25:33.670 [2024-05-15 19:40:59.701347] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:25:33.670 [2024-05-15 19:40:59.701352] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:25:33.670 [2024-05-15 19:40:59.701357] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:25:33.670 [2024-05-15 19:40:59.701361] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:25:33.670 [2024-05-15 19:40:59.701365] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:25:33.670 [2024-05-15 19:40:59.701370] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:25:33.670 [2024-05-15 19:40:59.701381] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:25:33.670 [2024-05-15 19:40:59.701389] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.670 [2024-05-15 19:40:59.701395] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.670 [2024-05-15 19:40:59.701399] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xee3c30) 00:25:33.670 [2024-05-15 19:40:59.701407] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:33.670 [2024-05-15 19:40:59.701419] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4b980, cid 0, qid 0 00:25:33.670 [2024-05-15 19:40:59.701603] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.670 [2024-05-15 19:40:59.701609] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.670 [2024-05-15 19:40:59.701613] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.670 [2024-05-15 19:40:59.701616] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf4b980) on tqpair=0xee3c30 00:25:33.670 [2024-05-15 19:40:59.701625] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.670 [2024-05-15 19:40:59.701629] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.670 [2024-05-15 19:40:59.701633] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xee3c30) 00:25:33.670 [2024-05-15 19:40:59.701639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.670 [2024-05-15 19:40:59.701645] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.670 [2024-05-15 19:40:59.701648] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.670 [2024-05-15 19:40:59.701652] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on 
tqpair(0xee3c30) 00:25:33.670 [2024-05-15 19:40:59.701658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.670 [2024-05-15 19:40:59.701664] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.670 [2024-05-15 19:40:59.701667] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.670 [2024-05-15 19:40:59.701670] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xee3c30) 00:25:33.670 [2024-05-15 19:40:59.701676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.670 [2024-05-15 19:40:59.701682] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.670 [2024-05-15 19:40:59.701685] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.670 [2024-05-15 19:40:59.701689] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee3c30) 00:25:33.670 [2024-05-15 19:40:59.701694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.670 [2024-05-15 19:40:59.701699] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:33.670 [2024-05-15 19:40:59.701707] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:33.670 [2024-05-15 19:40:59.701713] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.670 [2024-05-15 19:40:59.701717] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xee3c30) 00:25:33.670 [2024-05-15 19:40:59.701723] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.670 [2024-05-15 19:40:59.701735] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4b980, cid 0, qid 0 00:25:33.670 [2024-05-15 19:40:59.701740] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4bae0, cid 1, qid 0 00:25:33.670 [2024-05-15 19:40:59.701745] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4bc40, cid 2, qid 0 00:25:33.670 [2024-05-15 19:40:59.701750] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4bda0, cid 3, qid 0 00:25:33.670 [2024-05-15 19:40:59.701756] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4bf00, cid 4, qid 0 00:25:33.670 [2024-05-15 19:40:59.701981] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.670 [2024-05-15 19:40:59.701988] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.670 [2024-05-15 19:40:59.701991] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.670 [2024-05-15 19:40:59.701994] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf4bf00) on tqpair=0xee3c30 00:25:33.670 [2024-05-15 19:40:59.702001] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:25:33.670 [2024-05-15 19:40:59.702006] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:25:33.670 
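The SET FEATURES ASYNC EVENT CONFIGURATION command and the four ASYNC EVENT REQUESTs (cid 0-3) above arm the controller's asynchronous event reporting, and the GET FEATURES KEEP ALIVE TIMER entry sets up the keep-alive interval logged next. As a hedged sketch of how an application would consume those events through the public API (handler name and printout are hypothetical):

#include <stdio.h>
#include "spdk/nvme.h"

/* Hypothetical handler; SPDK invokes it when one of the async event requests
 * queued above completes (e.g. a namespace attribute change notice). */
static void aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
    (void)arg;
    if (spdk_nvme_cpl_is_error(cpl)) {
        return;
    }
    printf("async event: cdw0=0x%x\n", cpl->cdw0);
}

/* Assumes 'ctrlr' is attached as in the earlier sketches. */
static void enable_aer_logging(struct spdk_nvme_ctrlr *ctrlr)
{
    spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
}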
[2024-05-15 19:40:59.702013] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:25:33.670 [2024-05-15 19:40:59.702019] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:33.671 [2024-05-15 19:40:59.702026] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.671 [2024-05-15 19:40:59.702029] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.671 [2024-05-15 19:40:59.702033] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xee3c30) 00:25:33.671 [2024-05-15 19:40:59.702039] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:33.671 [2024-05-15 19:40:59.702049] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4bf00, cid 4, qid 0 00:25:33.671 [2024-05-15 19:40:59.702224] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.671 [2024-05-15 19:40:59.702230] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.671 [2024-05-15 19:40:59.702233] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.671 [2024-05-15 19:40:59.702237] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf4bf00) on tqpair=0xee3c30 00:25:33.671 [2024-05-15 19:40:59.702289] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:25:33.671 [2024-05-15 19:40:59.702298] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:33.671 [2024-05-15 19:40:59.702305] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.671 [2024-05-15 19:40:59.702309] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xee3c30) 00:25:33.671 [2024-05-15 19:40:59.702321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.671 [2024-05-15 19:40:59.702331] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4bf00, cid 4, qid 0 00:25:33.671 [2024-05-15 19:40:59.702580] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:33.671 [2024-05-15 19:40:59.702586] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:33.671 [2024-05-15 19:40:59.702590] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:33.671 [2024-05-15 19:40:59.702593] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xee3c30): datao=0, datal=4096, cccid=4 00:25:33.671 [2024-05-15 19:40:59.702598] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf4bf00) on tqpair(0xee3c30): expected_datao=0, payload_size=4096 00:25:33.671 [2024-05-15 19:40:59.702602] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.671 [2024-05-15 19:40:59.702609] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:33.671 [2024-05-15 19:40:59.702612] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:33.671 [2024-05-15 19:40:59.743500] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.671 [2024-05-15 19:40:59.743510] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.671 [2024-05-15 19:40:59.743516] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.671 [2024-05-15 19:40:59.743521] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf4bf00) on tqpair=0xee3c30 00:25:33.671 [2024-05-15 19:40:59.743532] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:25:33.671 [2024-05-15 19:40:59.743543] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:25:33.671 [2024-05-15 19:40:59.743552] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:25:33.671 [2024-05-15 19:40:59.743559] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.671 [2024-05-15 19:40:59.743563] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xee3c30) 00:25:33.671 [2024-05-15 19:40:59.743570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.671 [2024-05-15 19:40:59.743581] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4bf00, cid 4, qid 0 00:25:33.671 [2024-05-15 19:40:59.743782] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:33.671 [2024-05-15 19:40:59.743789] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:33.671 [2024-05-15 19:40:59.743792] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:33.671 [2024-05-15 19:40:59.743796] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xee3c30): datao=0, datal=4096, cccid=4 00:25:33.671 [2024-05-15 19:40:59.743800] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf4bf00) on tqpair(0xee3c30): expected_datao=0, payload_size=4096 00:25:33.671 [2024-05-15 19:40:59.743804] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.671 [2024-05-15 19:40:59.743837] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:33.671 [2024-05-15 19:40:59.743841] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:33.671 [2024-05-15 19:40:59.789320] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.671 [2024-05-15 19:40:59.789333] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.671 [2024-05-15 19:40:59.789336] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.671 [2024-05-15 19:40:59.789340] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf4bf00) on tqpair=0xee3c30 00:25:33.671 [2024-05-15 19:40:59.789351] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:33.671 [2024-05-15 19:40:59.789361] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:33.671 [2024-05-15 19:40:59.789369] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.671 [2024-05-15 19:40:59.789372] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xee3c30) 00:25:33.671 [2024-05-15 19:40:59.789379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.671 [2024-05-15 19:40:59.789392] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4bf00, cid 4, qid 0 00:25:33.671 [2024-05-15 19:40:59.789600] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:33.671 [2024-05-15 19:40:59.789606] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:33.671 [2024-05-15 19:40:59.789610] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:33.671 [2024-05-15 19:40:59.789613] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xee3c30): datao=0, datal=4096, cccid=4 00:25:33.671 [2024-05-15 19:40:59.789618] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf4bf00) on tqpair(0xee3c30): expected_datao=0, payload_size=4096 00:25:33.671 [2024-05-15 19:40:59.789622] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.671 [2024-05-15 19:40:59.789659] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:33.671 [2024-05-15 19:40:59.789663] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:33.671 [2024-05-15 19:40:59.831489] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.671 [2024-05-15 19:40:59.831499] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.671 [2024-05-15 19:40:59.831502] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.671 [2024-05-15 19:40:59.831506] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf4bf00) on tqpair=0xee3c30 00:25:33.671 [2024-05-15 19:40:59.831518] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:33.671 [2024-05-15 19:40:59.831526] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:25:33.671 [2024-05-15 19:40:59.831533] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:25:33.671 [2024-05-15 19:40:59.831539] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:33.671 [2024-05-15 19:40:59.831544] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:25:33.671 [2024-05-15 19:40:59.831549] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:25:33.671 [2024-05-15 19:40:59.831553] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:25:33.671 [2024-05-15 19:40:59.831558] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:25:33.671 [2024-05-15 19:40:59.831574] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.671 [2024-05-15 19:40:59.831578] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xee3c30) 00:25:33.671 [2024-05-15 19:40:59.831586] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.671 [2024-05-15 19:40:59.831592] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.671 [2024-05-15 19:40:59.831596] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.671 [2024-05-15 19:40:59.831599] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xee3c30) 00:25:33.671 [2024-05-15 19:40:59.831605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.671 [2024-05-15 19:40:59.831619] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4bf00, cid 4, qid 0 00:25:33.671 [2024-05-15 19:40:59.831624] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4c060, cid 5, qid 0 00:25:33.671 [2024-05-15 19:40:59.831822] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.671 [2024-05-15 19:40:59.831828] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.671 [2024-05-15 19:40:59.831831] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.671 [2024-05-15 19:40:59.831835] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf4bf00) on tqpair=0xee3c30 00:25:33.671 [2024-05-15 19:40:59.831841] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.671 [2024-05-15 19:40:59.831847] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.671 [2024-05-15 19:40:59.831850] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.671 [2024-05-15 19:40:59.831854] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf4c060) on tqpair=0xee3c30 00:25:33.671 [2024-05-15 19:40:59.831863] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.671 [2024-05-15 19:40:59.831867] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xee3c30) 00:25:33.671 [2024-05-15 19:40:59.831873] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.671 [2024-05-15 19:40:59.831885] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4c060, cid 5, qid 0 00:25:33.671 [2024-05-15 19:40:59.832083] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.671 [2024-05-15 19:40:59.832089] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.671 [2024-05-15 19:40:59.832092] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.671 [2024-05-15 19:40:59.832096] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf4c060) on tqpair=0xee3c30 00:25:33.671 [2024-05-15 19:40:59.832105] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.671 [2024-05-15 19:40:59.832108] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xee3c30) 00:25:33.671 [2024-05-15 19:40:59.832115] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.671 [2024-05-15 19:40:59.832124] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4c060, cid 5, qid 0 00:25:33.671 [2024-05-15 19:40:59.836323] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.671 [2024-05-15 19:40:59.836332] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.671 [2024-05-15 19:40:59.836335] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.672 [2024-05-15 19:40:59.836339] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf4c060) on tqpair=0xee3c30 00:25:33.672 [2024-05-15 19:40:59.836348] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.672 [2024-05-15 19:40:59.836352] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xee3c30) 00:25:33.672 [2024-05-15 19:40:59.836358] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.672 [2024-05-15 19:40:59.836370] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4c060, cid 5, qid 0 00:25:33.672 [2024-05-15 19:40:59.836560] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.672 [2024-05-15 19:40:59.836566] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.672 [2024-05-15 19:40:59.836570] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.672 [2024-05-15 19:40:59.836573] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf4c060) on tqpair=0xee3c30 00:25:33.672 [2024-05-15 19:40:59.836585] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.672 [2024-05-15 19:40:59.836589] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xee3c30) 00:25:33.672 [2024-05-15 19:40:59.836595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.672 [2024-05-15 19:40:59.836602] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.672 [2024-05-15 19:40:59.836606] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xee3c30) 00:25:33.672 [2024-05-15 19:40:59.836612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.672 [2024-05-15 19:40:59.836619] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.672 [2024-05-15 19:40:59.836623] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xee3c30) 00:25:33.672 [2024-05-15 19:40:59.836629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.672 [2024-05-15 19:40:59.836639] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.672 [2024-05-15 19:40:59.836643] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xee3c30) 00:25:33.672 [2024-05-15 19:40:59.836649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.672 [2024-05-15 19:40:59.836662] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4c060, cid 5, qid 0 00:25:33.672 [2024-05-15 19:40:59.836668] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4bf00, cid 4, qid 0 00:25:33.672 [2024-05-15 19:40:59.836672] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4c1c0, cid 6, qid 0 00:25:33.672 [2024-05-15 19:40:59.836677] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4c320, cid 7, qid 0 00:25:33.672 [2024-05-15 19:40:59.836946] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:33.672 [2024-05-15 19:40:59.836953] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:33.672 [2024-05-15 19:40:59.836956] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:33.672 [2024-05-15 19:40:59.836960] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xee3c30): datao=0, datal=8192, cccid=5 00:25:33.672 [2024-05-15 19:40:59.836964] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf4c060) on tqpair(0xee3c30): expected_datao=0, payload_size=8192 00:25:33.672 [2024-05-15 19:40:59.836968] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.672 [2024-05-15 19:40:59.837068] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:33.672 [2024-05-15 19:40:59.837072] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:33.672 [2024-05-15 19:40:59.837078] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:33.672 [2024-05-15 19:40:59.837084] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:33.672 [2024-05-15 19:40:59.837087] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:33.672 [2024-05-15 19:40:59.837090] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xee3c30): datao=0, datal=512, cccid=4 00:25:33.672 [2024-05-15 19:40:59.837095] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf4bf00) on tqpair(0xee3c30): expected_datao=0, payload_size=512 00:25:33.672 [2024-05-15 19:40:59.837099] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.672 [2024-05-15 19:40:59.837105] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:33.672 [2024-05-15 19:40:59.837108] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:33.672 [2024-05-15 19:40:59.837114] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:33.672 [2024-05-15 19:40:59.837120] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:33.672 [2024-05-15 19:40:59.837123] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:33.672 [2024-05-15 19:40:59.837126] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xee3c30): datao=0, datal=512, cccid=6 00:25:33.672 [2024-05-15 19:40:59.837130] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf4c1c0) on tqpair(0xee3c30): expected_datao=0, payload_size=512 00:25:33.672 [2024-05-15 19:40:59.837134] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.672 [2024-05-15 19:40:59.837141] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:33.672 [2024-05-15 19:40:59.837144] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:33.672 [2024-05-15 19:40:59.837150] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:33.672 [2024-05-15 19:40:59.837155] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:33.672 [2024-05-15 19:40:59.837158] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:33.672 [2024-05-15 19:40:59.837162] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xee3c30): datao=0, datal=4096, cccid=7 00:25:33.672 [2024-05-15 19:40:59.837166] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0xf4c320) on tqpair(0xee3c30): expected_datao=0, payload_size=4096 00:25:33.672 [2024-05-15 19:40:59.837170] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.672 [2024-05-15 19:40:59.837176] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:33.672 [2024-05-15 19:40:59.837180] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:33.672 [2024-05-15 19:40:59.837213] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.672 [2024-05-15 19:40:59.837221] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.672 [2024-05-15 19:40:59.837224] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.672 [2024-05-15 19:40:59.837228] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf4c060) on tqpair=0xee3c30 00:25:33.672 [2024-05-15 19:40:59.837240] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.672 [2024-05-15 19:40:59.837246] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.672 [2024-05-15 19:40:59.837249] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.672 [2024-05-15 19:40:59.837253] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf4bf00) on tqpair=0xee3c30 00:25:33.672 [2024-05-15 19:40:59.837261] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.672 [2024-05-15 19:40:59.837267] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.672 [2024-05-15 19:40:59.837270] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.672 [2024-05-15 19:40:59.837274] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf4c1c0) on tqpair=0xee3c30 00:25:33.672 [2024-05-15 19:40:59.837282] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.672 [2024-05-15 19:40:59.837288] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.672 [2024-05-15 19:40:59.837292] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.672 [2024-05-15 19:40:59.837295] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf4c320) on tqpair=0xee3c30 00:25:33.672 ===================================================== 00:25:33.672 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:33.672 ===================================================== 00:25:33.672 Controller Capabilities/Features 00:25:33.672 ================================ 00:25:33.672 Vendor ID: 8086 00:25:33.672 Subsystem Vendor ID: 8086 00:25:33.672 Serial Number: SPDK00000000000001 00:25:33.672 Model Number: SPDK bdev Controller 00:25:33.672 Firmware Version: 24.05 00:25:33.672 Recommended Arb Burst: 6 00:25:33.672 IEEE OUI Identifier: e4 d2 5c 00:25:33.672 Multi-path I/O 00:25:33.672 May have multiple subsystem ports: Yes 00:25:33.672 May have multiple controllers: Yes 00:25:33.672 Associated with SR-IOV VF: No 00:25:33.672 Max Data Transfer Size: 131072 00:25:33.672 Max Number of Namespaces: 32 00:25:33.672 Max Number of I/O Queues: 127 00:25:33.672 NVMe Specification Version (VS): 1.3 00:25:33.672 NVMe Specification Version (Identify): 1.3 00:25:33.672 Maximum Queue Entries: 128 00:25:33.672 Contiguous Queues Required: Yes 00:25:33.672 Arbitration Mechanisms Supported 00:25:33.672 Weighted Round Robin: Not Supported 00:25:33.672 Vendor Specific: Not Supported 00:25:33.672 Reset Timeout: 15000 ms 00:25:33.672 Doorbell Stride: 4 bytes 00:25:33.672 
NVM Subsystem Reset: Not Supported
00:25:33.672 Command Sets Supported
00:25:33.672 NVM Command Set: Supported
00:25:33.672 Boot Partition: Not Supported
00:25:33.672 Memory Page Size Minimum: 4096 bytes
00:25:33.672 Memory Page Size Maximum: 4096 bytes
00:25:33.672 Persistent Memory Region: Not Supported
00:25:33.672 Optional Asynchronous Events Supported
00:25:33.672 Namespace Attribute Notices: Supported
00:25:33.672 Firmware Activation Notices: Not Supported
00:25:33.672 ANA Change Notices: Not Supported
00:25:33.672 PLE Aggregate Log Change Notices: Not Supported
00:25:33.672 LBA Status Info Alert Notices: Not Supported
00:25:33.672 EGE Aggregate Log Change Notices: Not Supported
00:25:33.672 Normal NVM Subsystem Shutdown event: Not Supported
00:25:33.672 Zone Descriptor Change Notices: Not Supported
00:25:33.672 Discovery Log Change Notices: Not Supported
00:25:33.672 Controller Attributes
00:25:33.672 128-bit Host Identifier: Supported
00:25:33.672 Non-Operational Permissive Mode: Not Supported
00:25:33.672 NVM Sets: Not Supported
00:25:33.672 Read Recovery Levels: Not Supported
00:25:33.672 Endurance Groups: Not Supported
00:25:33.672 Predictable Latency Mode: Not Supported
00:25:33.672 Traffic Based Keep ALive: Not Supported
00:25:33.672 Namespace Granularity: Not Supported
00:25:33.672 SQ Associations: Not Supported
00:25:33.672 UUID List: Not Supported
00:25:33.672 Multi-Domain Subsystem: Not Supported
00:25:33.672 Fixed Capacity Management: Not Supported
00:25:33.672 Variable Capacity Management: Not Supported
00:25:33.672 Delete Endurance Group: Not Supported
00:25:33.672 Delete NVM Set: Not Supported
00:25:33.672 Extended LBA Formats Supported: Not Supported
00:25:33.672 Flexible Data Placement Supported: Not Supported
00:25:33.672
00:25:33.672 Controller Memory Buffer Support
00:25:33.672 ================================
00:25:33.673 Supported: No
00:25:33.673
00:25:33.673 Persistent Memory Region Support
00:25:33.673 ================================
00:25:33.673 Supported: No
00:25:33.673
00:25:33.673 Admin Command Set Attributes
00:25:33.673 ============================
00:25:33.673 Security Send/Receive: Not Supported
00:25:33.673 Format NVM: Not Supported
00:25:33.673 Firmware Activate/Download: Not Supported
00:25:33.673 Namespace Management: Not Supported
00:25:33.673 Device Self-Test: Not Supported
00:25:33.673 Directives: Not Supported
00:25:33.673 NVMe-MI: Not Supported
00:25:33.673 Virtualization Management: Not Supported
00:25:33.673 Doorbell Buffer Config: Not Supported
00:25:33.673 Get LBA Status Capability: Not Supported
00:25:33.673 Command & Feature Lockdown Capability: Not Supported
00:25:33.673 Abort Command Limit: 4
00:25:33.673 Async Event Request Limit: 4
00:25:33.673 Number of Firmware Slots: N/A
00:25:33.673 Firmware Slot 1 Read-Only: N/A
00:25:33.673 Firmware Activation Without Reset: N/A
00:25:33.673 Multiple Update Detection Support: N/A
00:25:33.673 Firmware Update Granularity: No Information Provided
00:25:33.673 Per-Namespace SMART Log: No
00:25:33.673 Asymmetric Namespace Access Log Page: Not Supported
00:25:33.673 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:25:33.673 Command Effects Log Page: Supported
00:25:33.673 Get Log Page Extended Data: Supported
00:25:33.673 Telemetry Log Pages: Not Supported
00:25:33.673 Persistent Event Log Pages: Not Supported
00:25:33.673 Supported Log Pages Log Page: May Support
00:25:33.673 Commands Supported & Effects Log Page: Not Supported
00:25:33.673 Feature Identifiers & Effects Log Page:May Support
00:25:33.673 NVMe-MI Commands & Effects Log Page: May Support
00:25:33.673 Data Area 4 for Telemetry Log: Not Supported
00:25:33.673 Error Log Page Entries Supported: 128
00:25:33.673 Keep Alive: Supported
00:25:33.673 Keep Alive Granularity: 10000 ms
00:25:33.673
00:25:33.673 NVM Command Set Attributes
00:25:33.673 ==========================
00:25:33.673 Submission Queue Entry Size
00:25:33.673 Max: 64
00:25:33.673 Min: 64
00:25:33.673 Completion Queue Entry Size
00:25:33.673 Max: 16
00:25:33.673 Min: 16
00:25:33.673 Number of Namespaces: 32
00:25:33.673 Compare Command: Supported
00:25:33.673 Write Uncorrectable Command: Not Supported
00:25:33.673 Dataset Management Command: Supported
00:25:33.673 Write Zeroes Command: Supported
00:25:33.673 Set Features Save Field: Not Supported
00:25:33.673 Reservations: Supported
00:25:33.673 Timestamp: Not Supported
00:25:33.673 Copy: Supported
00:25:33.673 Volatile Write Cache: Present
00:25:33.673 Atomic Write Unit (Normal): 1
00:25:33.673 Atomic Write Unit (PFail): 1
00:25:33.673 Atomic Compare & Write Unit: 1
00:25:33.673 Fused Compare & Write: Supported
00:25:33.673 Scatter-Gather List
00:25:33.673 SGL Command Set: Supported
00:25:33.673 SGL Keyed: Supported
00:25:33.673 SGL Bit Bucket Descriptor: Not Supported
00:25:33.673 SGL Metadata Pointer: Not Supported
00:25:33.673 Oversized SGL: Not Supported
00:25:33.673 SGL Metadata Address: Not Supported
00:25:33.673 SGL Offset: Supported
00:25:33.673 Transport SGL Data Block: Not Supported
00:25:33.673 Replay Protected Memory Block: Not Supported
00:25:33.673
00:25:33.673 Firmware Slot Information
00:25:33.673 =========================
00:25:33.673 Active slot: 1
00:25:33.673 Slot 1 Firmware Revision: 24.05
00:25:33.673
00:25:33.673
00:25:33.673 Commands Supported and Effects
00:25:33.673 ==============================
00:25:33.673 Admin Commands
00:25:33.673 --------------
00:25:33.673 Get Log Page (02h): Supported
00:25:33.673 Identify (06h): Supported
00:25:33.673 Abort (08h): Supported
00:25:33.673 Set Features (09h): Supported
00:25:33.673 Get Features (0Ah): Supported
00:25:33.673 Asynchronous Event Request (0Ch): Supported
00:25:33.673 Keep Alive (18h): Supported
00:25:33.673 I/O Commands
00:25:33.673 ------------
00:25:33.673 Flush (00h): Supported LBA-Change
00:25:33.673 Write (01h): Supported LBA-Change
00:25:33.673 Read (02h): Supported
00:25:33.673 Compare (05h): Supported
00:25:33.673 Write Zeroes (08h): Supported LBA-Change
00:25:33.673 Dataset Management (09h): Supported LBA-Change
00:25:33.673 Copy (19h): Supported LBA-Change
00:25:33.673 Unknown (79h): Supported LBA-Change
00:25:33.673 Unknown (7Ah): Supported
00:25:33.673
00:25:33.673 Error Log
00:25:33.673 =========
00:25:33.673
00:25:33.673 Arbitration
00:25:33.673 ===========
00:25:33.673 Arbitration Burst: 1
00:25:33.673
00:25:33.673 Power Management
00:25:33.673 ================
00:25:33.673 Number of Power States: 1
00:25:33.673 Current Power State: Power State #0
00:25:33.673 Power State #0:
00:25:33.673 Max Power: 0.00 W
00:25:33.673 Non-Operational State: Operational
00:25:33.673 Entry Latency: Not Reported
00:25:33.673 Exit Latency: Not Reported
00:25:33.673 Relative Read Throughput: 0
00:25:33.673 Relative Read Latency: 0
00:25:33.673 Relative Write Throughput: 0
00:25:33.673 Relative Write Latency: 0
00:25:33.673 Idle Power: Not Reported
00:25:33.673 Active Power: Not Reported
00:25:33.673 Non-Operational Permissive Mode: Not Supported
00:25:33.673
00:25:33.673 Health Information
00:25:33.673 ==================
00:25:33.673 Critical Warnings: 00:25:33.673 Available Spare Space: OK 00:25:33.673 Temperature: OK 00:25:33.673 Device Reliability: OK 00:25:33.673 Read Only: No 00:25:33.673 Volatile Memory Backup: OK 00:25:33.673 Current Temperature: 0 Kelvin (-273 Celsius) 00:25:33.673 Temperature Threshold: [2024-05-15 19:40:59.837405] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.673 [2024-05-15 19:40:59.837411] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xee3c30) 00:25:33.673 [2024-05-15 19:40:59.837418] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.673 [2024-05-15 19:40:59.837429] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4c320, cid 7, qid 0 00:25:33.673 [2024-05-15 19:40:59.837635] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.673 [2024-05-15 19:40:59.837641] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.673 [2024-05-15 19:40:59.837644] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.673 [2024-05-15 19:40:59.837648] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf4c320) on tqpair=0xee3c30 00:25:33.673 [2024-05-15 19:40:59.837676] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:25:33.673 [2024-05-15 19:40:59.837687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.673 [2024-05-15 19:40:59.837694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.673 [2024-05-15 19:40:59.837700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.673 [2024-05-15 19:40:59.837706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.673 [2024-05-15 19:40:59.837713] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.673 [2024-05-15 19:40:59.837717] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.674 [2024-05-15 19:40:59.837720] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee3c30) 00:25:33.674 [2024-05-15 19:40:59.837727] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.674 [2024-05-15 19:40:59.837738] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4bda0, cid 3, qid 0 00:25:33.674 [2024-05-15 19:40:59.837929] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.674 [2024-05-15 19:40:59.837935] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.674 [2024-05-15 19:40:59.837939] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.674 [2024-05-15 19:40:59.837944] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf4bda0) on tqpair=0xee3c30 00:25:33.674 [2024-05-15 19:40:59.837951] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.674 [2024-05-15 19:40:59.837955] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.674 [2024-05-15 19:40:59.837958] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee3c30) 00:25:33.674 [2024-05-15 19:40:59.837965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.674 [2024-05-15 19:40:59.837977] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4bda0, cid 3, qid 0 00:25:33.674 [2024-05-15 19:40:59.838189] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.674 [2024-05-15 19:40:59.838195] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.674 [2024-05-15 19:40:59.838199] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.674 [2024-05-15 19:40:59.838202] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf4bda0) on tqpair=0xee3c30 00:25:33.674 [2024-05-15 19:40:59.838207] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:25:33.674 [2024-05-15 19:40:59.838212] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:25:33.674 [2024-05-15 19:40:59.838220] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.674 [2024-05-15 19:40:59.838224] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.674 [2024-05-15 19:40:59.838228] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee3c30) 00:25:33.674 [2024-05-15 19:40:59.838234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.674 [2024-05-15 19:40:59.838244] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4bda0, cid 3, qid 0 00:25:33.674 [2024-05-15 19:40:59.838465] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.674 [2024-05-15 19:40:59.838472] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.674 [2024-05-15 19:40:59.838475] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.674 [2024-05-15 19:40:59.838479] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf4bda0) on tqpair=0xee3c30 00:25:33.674 [2024-05-15 19:40:59.838488] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.674 [2024-05-15 19:40:59.838492] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.674 [2024-05-15 19:40:59.838496] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee3c30) 00:25:33.674 [2024-05-15 19:40:59.838502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.674 [2024-05-15 19:40:59.838512] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4bda0, cid 3, qid 0 00:25:33.674 [2024-05-15 19:40:59.838765] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.674 [2024-05-15 19:40:59.838771] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.674 [2024-05-15 19:40:59.838774] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.674 [2024-05-15 19:40:59.838778] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf4bda0) on tqpair=0xee3c30 00:25:33.674 [2024-05-15 19:40:59.838788] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.674 [2024-05-15 19:40:59.838791] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.674 [2024-05-15 19:40:59.838795] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee3c30) 00:25:33.674 [2024-05-15 19:40:59.838801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.674 [2024-05-15 19:40:59.838811] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4bda0, cid 3, qid 0 00:25:33.674 [2024-05-15 19:40:59.839030] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.674 [2024-05-15 19:40:59.839038] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.674 [2024-05-15 19:40:59.839042] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.674 [2024-05-15 19:40:59.839045] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf4bda0) on tqpair=0xee3c30 00:25:33.674 [2024-05-15 19:40:59.839055] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.674 [2024-05-15 19:40:59.839058] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.674 [2024-05-15 19:40:59.839062] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee3c30) 00:25:33.674 [2024-05-15 19:40:59.839068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.674 [2024-05-15 19:40:59.839078] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4bda0, cid 3, qid 0 00:25:33.674 [2024-05-15 19:40:59.839246] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.674 [2024-05-15 19:40:59.839252] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.674 [2024-05-15 19:40:59.839255] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.674 [2024-05-15 19:40:59.839259] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf4bda0) on tqpair=0xee3c30 00:25:33.674 [2024-05-15 19:40:59.839268] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.674 [2024-05-15 19:40:59.839272] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.674 [2024-05-15 19:40:59.839275] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee3c30) 00:25:33.674 [2024-05-15 19:40:59.839282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.674 [2024-05-15 19:40:59.839291] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4bda0, cid 3, qid 0 00:25:33.674 [2024-05-15 19:40:59.839516] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.674 [2024-05-15 19:40:59.839523] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.674 [2024-05-15 19:40:59.839526] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.674 [2024-05-15 19:40:59.839530] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf4bda0) on tqpair=0xee3c30 00:25:33.674 [2024-05-15 19:40:59.839539] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.674 [2024-05-15 19:40:59.839543] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.674 [2024-05-15 19:40:59.839546] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee3c30) 00:25:33.674 
[2024-05-15 19:40:59.839553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.674 [2024-05-15 19:40:59.839563] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4bda0, cid 3, qid 0 00:25:33.674 [2024-05-15 19:40:59.839754] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.674 [2024-05-15 19:40:59.839760] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.674 [2024-05-15 19:40:59.839763] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.674 [2024-05-15 19:40:59.839767] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf4bda0) on tqpair=0xee3c30 00:25:33.674 [2024-05-15 19:40:59.839776] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.674 [2024-05-15 19:40:59.839780] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.674 [2024-05-15 19:40:59.839783] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee3c30) 00:25:33.674 [2024-05-15 19:40:59.839790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.674 [2024-05-15 19:40:59.839799] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4bda0, cid 3, qid 0 00:25:33.674 [2024-05-15 19:40:59.840023] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.674 [2024-05-15 19:40:59.840029] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.674 [2024-05-15 19:40:59.840035] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.674 [2024-05-15 19:40:59.840039] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf4bda0) on tqpair=0xee3c30 00:25:33.674 [2024-05-15 19:40:59.840048] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.674 [2024-05-15 19:40:59.840052] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.674 [2024-05-15 19:40:59.840055] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee3c30) 00:25:33.674 [2024-05-15 19:40:59.840062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.674 [2024-05-15 19:40:59.840071] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4bda0, cid 3, qid 0 00:25:33.674 [2024-05-15 19:40:59.840236] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.674 [2024-05-15 19:40:59.840242] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.674 [2024-05-15 19:40:59.840245] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.674 [2024-05-15 19:40:59.840249] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf4bda0) on tqpair=0xee3c30 00:25:33.674 [2024-05-15 19:40:59.840258] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.674 [2024-05-15 19:40:59.840262] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.674 [2024-05-15 19:40:59.840265] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee3c30) 00:25:33.674 [2024-05-15 19:40:59.840272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.674 [2024-05-15 19:40:59.840281] 
nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4bda0, cid 3, qid 0 00:25:33.674 [2024-05-15 19:40:59.840512] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.674 [2024-05-15 19:40:59.840518] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.674 [2024-05-15 19:40:59.840522] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.674 [2024-05-15 19:40:59.840525] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf4bda0) on tqpair=0xee3c30 00:25:33.674 [2024-05-15 19:40:59.840534] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.674 [2024-05-15 19:40:59.840538] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.674 [2024-05-15 19:40:59.840542] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee3c30) 00:25:33.674 [2024-05-15 19:40:59.840548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.674 [2024-05-15 19:40:59.840558] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4bda0, cid 3, qid 0 00:25:33.674 [2024-05-15 19:40:59.840779] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.674 [2024-05-15 19:40:59.840785] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.674 [2024-05-15 19:40:59.840788] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.674 [2024-05-15 19:40:59.840792] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf4bda0) on tqpair=0xee3c30 00:25:33.674 [2024-05-15 19:40:59.840801] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.675 [2024-05-15 19:40:59.840805] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.675 [2024-05-15 19:40:59.840808] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee3c30) 00:25:33.675 [2024-05-15 19:40:59.840815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.675 [2024-05-15 19:40:59.840824] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4bda0, cid 3, qid 0 00:25:33.675 [2024-05-15 19:40:59.840995] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.675 [2024-05-15 19:40:59.841002] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.675 [2024-05-15 19:40:59.841005] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.675 [2024-05-15 19:40:59.841010] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf4bda0) on tqpair=0xee3c30 00:25:33.675 [2024-05-15 19:40:59.841020] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.675 [2024-05-15 19:40:59.841024] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.675 [2024-05-15 19:40:59.841027] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee3c30) 00:25:33.675 [2024-05-15 19:40:59.841033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.675 [2024-05-15 19:40:59.841043] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4bda0, cid 3, qid 0 00:25:33.675 [2024-05-15 19:40:59.841231] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.675 
[2024-05-15 19:40:59.841238] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.675 [2024-05-15 19:40:59.841241] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.675 [2024-05-15 19:40:59.841245] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf4bda0) on tqpair=0xee3c30 00:25:33.675 [2024-05-15 19:40:59.841254] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.675 [2024-05-15 19:40:59.841258] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.675 [2024-05-15 19:40:59.841261] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee3c30) 00:25:33.675 [2024-05-15 19:40:59.841268] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.675 [2024-05-15 19:40:59.841277] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf4bda0, cid 3, qid 0 00:25:33.675 [2024-05-15 19:40:59.845322] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.675 [2024-05-15 19:40:59.845330] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.675 [2024-05-15 19:40:59.845334] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.675 [2024-05-15 19:40:59.845338] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf4bda0) on tqpair=0xee3c30 00:25:33.675 [2024-05-15 19:40:59.845345] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:25:33.936 0 Kelvin (-273 Celsius) 00:25:33.936 Available Spare: 0% 00:25:33.936 Available Spare Threshold: 0% 00:25:33.936 Life Percentage Used: 0% 00:25:33.936 Data Units Read: 0 00:25:33.936 Data Units Written: 0 00:25:33.936 Host Read Commands: 0 00:25:33.936 Host Write Commands: 0 00:25:33.936 Controller Busy Time: 0 minutes 00:25:33.936 Power Cycles: 0 00:25:33.936 Power On Hours: 0 hours 00:25:33.936 Unsafe Shutdowns: 0 00:25:33.936 Unrecoverable Media Errors: 0 00:25:33.936 Lifetime Error Log Entries: 0 00:25:33.936 Warning Temperature Time: 0 minutes 00:25:33.936 Critical Temperature Time: 0 minutes 00:25:33.936 00:25:33.936 Number of Queues 00:25:33.936 ================ 00:25:33.936 Number of I/O Submission Queues: 127 00:25:33.936 Number of I/O Completion Queues: 127 00:25:33.936 00:25:33.936 Active Namespaces 00:25:33.936 ================= 00:25:33.936 Namespace ID:1 00:25:33.936 Error Recovery Timeout: Unlimited 00:25:33.936 Command Set Identifier: NVM (00h) 00:25:33.936 Deallocate: Supported 00:25:33.937 Deallocated/Unwritten Error: Not Supported 00:25:33.937 Deallocated Read Value: Unknown 00:25:33.937 Deallocate in Write Zeroes: Not Supported 00:25:33.937 Deallocated Guard Field: 0xFFFF 00:25:33.937 Flush: Supported 00:25:33.937 Reservation: Supported 00:25:33.937 Namespace Sharing Capabilities: Multiple Controllers 00:25:33.937 Size (in LBAs): 131072 (0GiB) 00:25:33.937 Capacity (in LBAs): 131072 (0GiB) 00:25:33.937 Utilization (in LBAs): 131072 (0GiB) 00:25:33.937 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:33.937 EUI64: ABCDEF0123456789 00:25:33.937 UUID: 58861591-8b28-4d0c-a98b-5fe44a7433b7 00:25:33.937 Thin Provisioning: Not Supported 00:25:33.937 Per-NS Atomic Units: Yes 00:25:33.937 Atomic Boundary Size (Normal): 0 00:25:33.937 Atomic Boundary Size (PFail): 0 00:25:33.937 Atomic Boundary Offset: 0 00:25:33.937 Maximum Single Source Range Length: 65535 00:25:33.937 
Maximum Copy Length: 65535 00:25:33.937 Maximum Source Range Count: 1 00:25:33.937 NGUID/EUI64 Never Reused: No 00:25:33.937 Namespace Write Protected: No 00:25:33.937 Number of LBA Formats: 1 00:25:33.937 Current LBA Format: LBA Format #00 00:25:33.937 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:33.937 00:25:33.937 19:40:59 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:25:33.937 19:40:59 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:33.937 19:40:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.937 19:40:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:33.937 19:40:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.937 19:40:59 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:33.937 19:40:59 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:25:33.937 19:40:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:33.937 19:40:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:25:33.937 19:40:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:33.937 19:40:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:25:33.937 19:40:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:33.937 19:40:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:33.937 rmmod nvme_tcp 00:25:33.937 rmmod nvme_fabrics 00:25:33.937 rmmod nvme_keyring 00:25:33.937 19:40:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:33.937 19:40:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:25:33.937 19:40:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:25:33.937 19:40:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 3702657 ']' 00:25:33.937 19:40:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 3702657 00:25:33.937 19:40:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 3702657 ']' 00:25:33.937 19:40:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 3702657 00:25:33.937 19:40:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:25:33.937 19:40:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:33.937 19:40:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3702657 00:25:33.937 19:40:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:33.937 19:40:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:33.937 19:40:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3702657' 00:25:33.937 killing process with pid 3702657 00:25:33.937 19:40:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 3702657 00:25:33.937 [2024-05-15 19:40:59.992631] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:33.937 19:40:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 3702657 00:25:34.200 19:41:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:34.200 19:41:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:34.200 
19:41:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:34.200 19:41:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:34.200 19:41:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:34.200 19:41:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:34.200 19:41:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:34.200 19:41:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:36.116 19:41:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:36.116 00:25:36.116 real 0m12.398s 00:25:36.116 user 0m8.815s 00:25:36.116 sys 0m6.680s 00:25:36.116 19:41:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:36.116 19:41:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:36.116 ************************************ 00:25:36.116 END TEST nvmf_identify 00:25:36.116 ************************************ 00:25:36.116 19:41:02 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:36.116 19:41:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:36.116 19:41:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:36.116 19:41:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:36.377 ************************************ 00:25:36.377 START TEST nvmf_perf 00:25:36.377 ************************************ 00:25:36.377 19:41:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:36.377 * Looking for test storage... 
00:25:36.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.378 19:41:02 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:25:36.378 19:41:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:25:44.524 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:44.524 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:25:44.524 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:44.524 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:44.524 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:44.524 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:44.524 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:44.524 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:25:44.524 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:44.524 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:25:44.524 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:25:44.524 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:25:44.524 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:25:44.524 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:44.786 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:44.786 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:44.786 Found net devices under 0000:31:00.0: cvl_0_0 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:44.786 Found net devices under 0000:31:00.1: cvl_0_1 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:44.786 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:45.048 19:41:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:45.048 19:41:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:45.048 19:41:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:45.048 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:45.048 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.418 ms 00:25:45.048 00:25:45.048 --- 10.0.0.2 ping statistics --- 00:25:45.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.048 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:25:45.048 19:41:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:45.048 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:45.048 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.355 ms 00:25:45.048 00:25:45.048 --- 10.0.0.1 ping statistics --- 00:25:45.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.048 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:25:45.048 19:41:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:45.048 19:41:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:25:45.048 19:41:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:45.048 19:41:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:45.048 19:41:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:45.048 19:41:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:45.048 19:41:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:45.048 19:41:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:45.048 19:41:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:45.048 19:41:11 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:45.048 19:41:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:45.048 19:41:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:45.048 19:41:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:45.048 19:41:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=3707682 00:25:45.048 19:41:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 3707682 00:25:45.048 19:41:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:45.048 19:41:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 3707682 ']' 00:25:45.048 19:41:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:45.048 19:41:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:45.048 19:41:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:45.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:45.048 19:41:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:45.048 19:41:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:45.048 [2024-05-15 19:41:11.163598] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:25:45.048 [2024-05-15 19:41:11.163681] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:45.048 EAL: No free 2048 kB hugepages reported on node 1 00:25:45.309 [2024-05-15 19:41:11.258496] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:45.309 [2024-05-15 19:41:11.355243] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:45.309 [2024-05-15 19:41:11.355303] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:45.309 [2024-05-15 19:41:11.355311] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:45.309 [2024-05-15 19:41:11.355329] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:45.309 [2024-05-15 19:41:11.355336] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:45.309 [2024-05-15 19:41:11.355469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:45.309 [2024-05-15 19:41:11.355731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:45.309 [2024-05-15 19:41:11.355899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:45.309 [2024-05-15 19:41:11.355901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:45.881 19:41:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:45.881 19:41:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:25:45.881 19:41:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:45.881 19:41:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:45.881 19:41:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:46.142 19:41:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:46.142 19:41:12 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:46.142 19:41:12 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:46.714 19:41:12 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:46.714 19:41:12 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:46.714 19:41:12 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:25:46.714 19:41:12 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:46.973 19:41:13 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:25:46.973 19:41:13 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:25:46.973 19:41:13 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:46.973 19:41:13 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:46.973 19:41:13 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:47.234 [2024-05-15 19:41:13.242883] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:47.234 19:41:13 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:47.494 19:41:13 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:47.494 19:41:13 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:47.755 19:41:13 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:47.755 19:41:13 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:47.755 19:41:13 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:48.016 [2024-05-15 19:41:14.113848] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:48.016 [2024-05-15 19:41:14.114074] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:48.016 19:41:14 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:48.277 19:41:14 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:25:48.277 19:41:14 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:25:48.277 19:41:14 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:48.277 19:41:14 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:25:49.663 Initializing NVMe Controllers 00:25:49.663 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:25:49.663 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:25:49.663 Initialization complete. Launching workers. 00:25:49.663 ======================================================== 00:25:49.663 Latency(us) 00:25:49.663 Device Information : IOPS MiB/s Average min max 00:25:49.663 PCIE (0000:65:00.0) NSID 1 from core 0: 79222.94 309.46 403.42 13.37 7199.19 00:25:49.663 ======================================================== 00:25:49.663 Total : 79222.94 309.46 403.42 13.37 7199.19 00:25:49.663 00:25:49.663 19:41:15 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:49.663 EAL: No free 2048 kB hugepages reported on node 1 00:25:51.045 Initializing NVMe Controllers 00:25:51.045 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:51.045 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:51.045 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:51.045 Initialization complete. Launching workers. 
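host/perf.sh configures the freshly started target purely over rpc.py, as traced above. Stripped of the long workspace paths, the sequence reduces to the calls below; the 64/512 malloc geometry, the cnode1 NQN, the serial number and the 0000:65:00.0 NVMe address are just the values this run uses, and the pipe on the first line is an assumption inferred from the paired perf.sh@28 entries that log gen_nvme.sh and load_subsystem_config together.

  scripts/gen_nvme.sh | scripts/rpc.py load_subsystem_config          # attach the local SSD as bdev Nvme0n1 (pipeline assumed)
  scripts/rpc.py framework_get_config bdev \
      | jq -r '.[].params | select(.name=="Nvme0").traddr'            # -> 0000:65:00.0
  scripts/rpc.py bdev_malloc_create 64 512                            # RAM-backed bdev Malloc0
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

After this the subsystem exposes two namespaces (the malloc bdev and the real NVMe drive), which is why the fabrics perf runs below report NSID 1 and NSID 2 separately.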
00:25:51.046 ======================================================== 00:25:51.046 Latency(us) 00:25:51.046 Device Information : IOPS MiB/s Average min max 00:25:51.046 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 93.00 0.36 10923.91 390.79 45728.37 00:25:51.046 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 66.00 0.26 15236.37 6983.88 48884.21 00:25:51.046 ======================================================== 00:25:51.046 Total : 159.00 0.62 12713.99 390.79 48884.21 00:25:51.046 00:25:51.046 19:41:16 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:51.046 EAL: No free 2048 kB hugepages reported on node 1 00:25:52.429 Initializing NVMe Controllers 00:25:52.429 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:52.429 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:52.429 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:52.429 Initialization complete. Launching workers. 00:25:52.430 ======================================================== 00:25:52.430 Latency(us) 00:25:52.430 Device Information : IOPS MiB/s Average min max 00:25:52.430 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8523.98 33.30 3755.19 497.59 8953.31 00:25:52.430 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3822.99 14.93 8429.86 6946.66 16919.89 00:25:52.430 ======================================================== 00:25:52.430 Total : 12346.98 48.23 5202.61 497.59 16919.89 00:25:52.430 00:25:52.430 19:41:18 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:25:52.430 19:41:18 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:25:52.430 19:41:18 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:52.430 EAL: No free 2048 kB hugepages reported on node 1 00:25:54.984 Initializing NVMe Controllers 00:25:54.984 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:54.984 Controller IO queue size 128, less than required. 00:25:54.984 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:54.984 Controller IO queue size 128, less than required. 00:25:54.984 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:54.984 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:54.984 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:54.984 Initialization complete. Launching workers. 
00:25:54.984 ======================================================== 00:25:54.984 Latency(us) 00:25:54.984 Device Information : IOPS MiB/s Average min max 00:25:54.984 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 967.87 241.97 134600.82 72147.76 199375.33 00:25:54.984 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 597.61 149.40 232281.11 63188.07 383594.31 00:25:54.984 ======================================================== 00:25:54.984 Total : 1565.49 391.37 171889.57 63188.07 383594.31 00:25:54.984 00:25:54.984 19:41:20 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:25:54.984 EAL: No free 2048 kB hugepages reported on node 1 00:25:54.984 No valid NVMe controllers or AIO or URING devices found 00:25:54.984 Initializing NVMe Controllers 00:25:54.984 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:54.984 Controller IO queue size 128, less than required. 00:25:54.984 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:54.984 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:54.984 Controller IO queue size 128, less than required. 00:25:54.984 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:54.984 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:25:54.984 WARNING: Some requested NVMe devices were skipped 00:25:54.985 19:41:21 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:25:54.985 EAL: No free 2048 kB hugepages reported on node 1 00:25:57.527 Initializing NVMe Controllers 00:25:57.527 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:57.527 Controller IO queue size 128, less than required. 00:25:57.527 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:57.527 Controller IO queue size 128, less than required. 00:25:57.527 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:57.527 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:57.527 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:57.527 Initialization complete. Launching workers. 
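All of the fabrics numbers above come from the same initiator binary, build/bin/spdk_nvme_perf, pointed at the TCP listener instead of a PCIe address (the very first run, against trtype:PCIe traddr:0000:65:00.0, is the local baseline). Between runs only the queue depth, I/O size and duration of the shared -w randrw -M 50 workload (a 50/50 random read/write mix) change, plus per-run extras such as -HI, -O, -c/-P and --transport-stat that are visible in the individual command lines. Condensed, with the workspace path shortened:

  PERF=build/bin/spdk_nvme_perf
  TRID='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
  $PERF -q 1   -o 4096   -w randrw -M 50 -t 1 -r "$TRID"                        # latency-oriented, QD 1
  $PERF -q 32  -o 4096   -w randrw -M 50 -t 1 -HI -r "$TRID"                    # QD 32, 4 KiB blocks
  $PERF -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r "$TRID"               # QD 128, 256 KiB I/O
  $PERF -q 128 -o 36964  -O 4096  -w randrw -M 50 -t 5 -c 0xf -P 4 -r "$TRID"   # skipped above: 36964 is not a multiple of the 512 B sector size
  $PERF -q 128 -o 262144 -w randrw -M 50 -t 2 -r "$TRID" --transport-stat       # also dumps the TCP transport poll/completion counters seen further down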
00:25:57.527 00:25:57.527 ==================== 00:25:57.527 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:57.527 TCP transport: 00:25:57.527 polls: 38870 00:25:57.527 idle_polls: 16995 00:25:57.527 sock_completions: 21875 00:25:57.527 nvme_completions: 3927 00:25:57.527 submitted_requests: 5890 00:25:57.527 queued_requests: 1 00:25:57.527 00:25:57.527 ==================== 00:25:57.527 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:57.527 TCP transport: 00:25:57.527 polls: 38677 00:25:57.527 idle_polls: 15445 00:25:57.527 sock_completions: 23232 00:25:57.527 nvme_completions: 4063 00:25:57.528 submitted_requests: 6120 00:25:57.528 queued_requests: 1 00:25:57.528 ======================================================== 00:25:57.528 Latency(us) 00:25:57.528 Device Information : IOPS MiB/s Average min max 00:25:57.528 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 981.50 245.37 133861.14 66498.62 210051.62 00:25:57.528 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1015.50 253.87 128103.99 49172.96 173000.23 00:25:57.528 ======================================================== 00:25:57.528 Total : 1997.00 499.25 130933.56 49172.96 210051.62 00:25:57.528 00:25:57.528 19:41:23 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:25:57.528 19:41:23 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:57.788 19:41:23 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:25:57.788 19:41:23 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:25:57.788 19:41:23 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:25:57.788 19:41:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:57.788 19:41:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:25:57.788 19:41:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:57.788 19:41:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:25:57.788 19:41:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:57.788 19:41:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:57.788 rmmod nvme_tcp 00:25:57.788 rmmod nvme_fabrics 00:25:57.788 rmmod nvme_keyring 00:25:57.788 19:41:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:57.788 19:41:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:25:57.788 19:41:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:25:57.788 19:41:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 3707682 ']' 00:25:57.788 19:41:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 3707682 00:25:57.788 19:41:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 3707682 ']' 00:25:57.788 19:41:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 3707682 00:25:57.788 19:41:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:25:57.788 19:41:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:57.788 19:41:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3707682 00:25:57.788 19:41:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:57.788 19:41:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:57.788 19:41:23 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3707682' 00:25:57.788 killing process with pid 3707682 00:25:57.788 19:41:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 3707682 00:25:57.788 [2024-05-15 19:41:23.938531] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:57.788 19:41:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 3707682 00:26:00.332 19:41:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:00.332 19:41:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:00.332 19:41:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:00.332 19:41:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:00.332 19:41:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:00.332 19:41:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:00.332 19:41:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:00.332 19:41:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:02.247 19:41:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:02.247 00:26:02.247 real 0m25.678s 00:26:02.247 user 1m0.690s 00:26:02.247 sys 0m8.970s 00:26:02.247 19:41:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:02.247 19:41:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:02.247 ************************************ 00:26:02.247 END TEST nvmf_perf 00:26:02.247 ************************************ 00:26:02.247 19:41:28 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:02.247 19:41:28 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:02.247 19:41:28 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:02.247 19:41:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:02.247 ************************************ 00:26:02.247 START TEST nvmf_fio_host 00:26:02.247 ************************************ 00:26:02.247 19:41:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:02.247 * Looking for test storage... 
00:26:02.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:02.247 19:41:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:02.247 19:41:28 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:02.247 19:41:28 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:02.247 19:41:28 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:02.247 19:41:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.247 19:41:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.247 19:41:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.247 19:41:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:26:02.247 19:41:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.247 19:41:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:02.247 19:41:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:26:02.247 19:41:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:02.247 19:41:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:02.247 19:41:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:26:02.247 19:41:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:02.247 19:41:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:02.247 19:41:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:02.247 19:41:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:02.247 19:41:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:02.247 19:41:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:02.247 19:41:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:02.247 19:41:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:02.247 19:41:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:02.247 19:41:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:02.247 19:41:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:02.247 19:41:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:02.247 19:41:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:02.247 19:41:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:02.247 19:41:28 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:02.247 19:41:28 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:02.247 19:41:28 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:02.247 19:41:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.247 19:41:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.247 19:41:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.247 19:41:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:26:02.247 19:41:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.247 19:41:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:26:02.247 19:41:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:02.247 19:41:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:02.247 19:41:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:02.247 19:41:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:02.248 19:41:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:02.248 19:41:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:02.248 19:41:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:02.248 19:41:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:02.248 19:41:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # nvmftestinit 00:26:02.248 19:41:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:02.248 19:41:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:02.248 19:41:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:02.248 19:41:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:02.248 19:41:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:02.248 19:41:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.248 19:41:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:02.248 19:41:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:02.248 19:41:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:02.248 19:41:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:02.248 19:41:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:26:02.248 19:41:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.438 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:26:10.438 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:26:10.438 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:10.438 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:10.438 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:10.438 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:10.438 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:10.438 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:26:10.438 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:10.438 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:26:10.438 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:26:10.438 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:26:10.438 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:26:10.438 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:26:10.438 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:26:10.438 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:10.438 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:10.438 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:10.438 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:10.438 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:10.438 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:10.438 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:10.438 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:10.438 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:10.438 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:10.438 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:10.438 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:10.438 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:10.438 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:10.438 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:10.439 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:10.439 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:10.439 Found net devices under 0000:31:00.0: cvl_0_0 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:10.439 Found net devices under 0000:31:00.1: cvl_0_1 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp 
]] 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:10.439 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:10.439 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.594 ms 00:26:10.439 00:26:10.439 --- 10.0.0.2 ping statistics --- 00:26:10.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:10.439 rtt min/avg/max/mdev = 0.594/0.594/0.594/0.000 ms 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:10.439 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:10.439 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:26:10.439 00:26:10.439 --- 10.0.0.1 ping statistics --- 00:26:10.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:10.439 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # [[ y != y ]] 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@22 -- # nvmfpid=3715102 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # waitforlisten 3715102 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 3715102 ']' 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:10.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:10.439 19:41:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.701 [2024-05-15 19:41:36.668175] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:26:10.701 [2024-05-15 19:41:36.668241] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:10.701 EAL: No free 2048 kB hugepages reported on node 1 00:26:10.701 [2024-05-15 19:41:36.763517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:10.701 [2024-05-15 19:41:36.861530] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:10.701 [2024-05-15 19:41:36.861587] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:10.701 [2024-05-15 19:41:36.861595] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:10.701 [2024-05-15 19:41:36.861602] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:10.701 [2024-05-15 19:41:36.861608] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:10.701 [2024-05-15 19:41:36.861753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:10.701 [2024-05-15 19:41:36.861901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:10.701 [2024-05-15 19:41:36.862070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:10.701 [2024-05-15 19:41:36.862072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:11.647 19:41:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:11.647 19:41:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:26:11.647 19:41:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:11.647 19:41:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.647 19:41:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.647 [2024-05-15 19:41:37.560155] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:11.647 19:41:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.647 19:41:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:26:11.647 19:41:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:11.647 19:41:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.647 19:41:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:11.648 19:41:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.648 19:41:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.648 Malloc1 00:26:11.648 19:41:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.648 19:41:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:11.648 19:41:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.648 19:41:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.648 19:41:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.648 19:41:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:11.648 19:41:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.648 19:41:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.648 19:41:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.648 19:41:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:11.648 19:41:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.648 19:41:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 
-- # set +x 00:26:11.648 [2024-05-15 19:41:37.659390] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:11.648 [2024-05-15 19:41:37.659608] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:11.648 19:41:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.648 19:41:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:11.648 19:41:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.648 19:41:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.648 19:41:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.648 19:41:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:11.648 19:41:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:11.648 19:41:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:11.648 19:41:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:26:11.648 19:41:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:11.648 19:41:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:26:11.648 19:41:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:11.648 19:41:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:26:11.648 19:41:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:26:11.648 19:41:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:26:11.648 19:41:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:11.648 19:41:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:26:11.648 19:41:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:26:11.648 19:41:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:26:11.648 19:41:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:26:11.648 19:41:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:26:11.648 19:41:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:11.648 19:41:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:26:11.648 19:41:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:26:11.648 19:41:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:26:11.648 
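fio_nvme is a small wrapper (the fio_plugin function in autotest_common.sh) around stock fio: after the libasan/libclang_rt probing traced here, it LD_PRELOADs SPDK's fio engine and passes a "filename" that is really an NVMe-oF transport ID, which is how the job reaches nqn.2016-06.io.spdk:cnode1 over TCP. Reduced to its essentials (paths and the job file are the ones from this workspace; the filename syntax is the SPDK fio plugin's convention, as the full command further down in the trace shows):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  LD_PRELOAD=$SPDK/build/fio/spdk_nvme /usr/src/fio/fio \
      $SPDK/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
      --bs=4096

The second invocation below swaps in mock_sgl_config.fio (the 16 KiB job) against the same target; the plugin setup is otherwise identical.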
19:41:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:26:11.648 19:41:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:11.648 19:41:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:11.909 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:11.909 fio-3.35 00:26:11.909 Starting 1 thread 00:26:12.169 EAL: No free 2048 kB hugepages reported on node 1 00:26:14.715 00:26:14.715 test: (groupid=0, jobs=1): err= 0: pid=3715629: Wed May 15 19:41:40 2024 00:26:14.715 read: IOPS=9758, BW=38.1MiB/s (40.0MB/s)(76.5MiB/2006msec) 00:26:14.715 slat (usec): min=2, max=275, avg= 2.28, stdev= 2.72 00:26:14.715 clat (usec): min=3470, max=12485, avg=7241.36, stdev=533.89 00:26:14.715 lat (usec): min=3501, max=12487, avg=7243.64, stdev=533.73 00:26:14.715 clat percentiles (usec): 00:26:14.715 | 1.00th=[ 5997], 5.00th=[ 6390], 10.00th=[ 6587], 20.00th=[ 6849], 00:26:14.715 | 30.00th=[ 6980], 40.00th=[ 7111], 50.00th=[ 7242], 60.00th=[ 7373], 00:26:14.715 | 70.00th=[ 7504], 80.00th=[ 7635], 90.00th=[ 7898], 95.00th=[ 8029], 00:26:14.715 | 99.00th=[ 8455], 99.50th=[ 8586], 99.90th=[10683], 99.95th=[11731], 00:26:14.715 | 99.99th=[12387] 00:26:14.715 bw ( KiB/s): min=38192, max=39616, per=99.95%, avg=39014.00, stdev=634.02, samples=4 00:26:14.715 iops : min= 9548, max= 9904, avg=9753.50, stdev=158.50, samples=4 00:26:14.715 write: IOPS=9767, BW=38.2MiB/s (40.0MB/s)(76.5MiB/2006msec); 0 zone resets 00:26:14.715 slat (usec): min=2, max=250, avg= 2.38, stdev= 2.00 00:26:14.715 clat (usec): min=2798, max=11535, avg=5802.22, stdev=456.58 00:26:14.715 lat (usec): min=2817, max=11537, avg=5804.60, stdev=456.50 00:26:14.715 clat percentiles (usec): 00:26:14.715 | 1.00th=[ 4752], 5.00th=[ 5080], 10.00th=[ 5276], 20.00th=[ 5473], 00:26:14.715 | 30.00th=[ 5604], 40.00th=[ 5669], 50.00th=[ 5800], 60.00th=[ 5932], 00:26:14.715 | 70.00th=[ 5997], 80.00th=[ 6128], 90.00th=[ 6325], 95.00th=[ 6456], 00:26:14.715 | 99.00th=[ 6783], 99.50th=[ 6915], 99.90th=[ 9503], 99.95th=[10290], 00:26:14.715 | 99.99th=[10945] 00:26:14.715 bw ( KiB/s): min=38720, max=39528, per=100.00%, avg=39072.00, stdev=336.57, samples=4 00:26:14.715 iops : min= 9680, max= 9882, avg=9768.00, stdev=84.14, samples=4 00:26:14.715 lat (msec) : 4=0.07%, 10=99.81%, 20=0.13% 00:26:14.715 cpu : usr=64.64%, sys=30.57%, ctx=79, majf=0, minf=5 00:26:14.715 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:26:14.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:14.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:14.715 issued rwts: total=19576,19594,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:14.715 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:14.715 00:26:14.715 Run status group 0 (all jobs): 00:26:14.715 READ: bw=38.1MiB/s (40.0MB/s), 38.1MiB/s-38.1MiB/s (40.0MB/s-40.0MB/s), io=76.5MiB (80.2MB), run=2006-2006msec 00:26:14.715 WRITE: bw=38.2MiB/s (40.0MB/s), 38.2MiB/s-38.2MiB/s (40.0MB/s-40.0MB/s), io=76.5MiB (80.3MB), run=2006-2006msec 00:26:14.715 19:41:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@43 -- # fio_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:14.715 19:41:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:14.715 19:41:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:26:14.715 19:41:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:14.715 19:41:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:26:14.715 19:41:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:14.715 19:41:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:26:14.716 19:41:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:26:14.716 19:41:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:26:14.716 19:41:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:14.716 19:41:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:26:14.716 19:41:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:26:14.716 19:41:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:26:14.716 19:41:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:26:14.716 19:41:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:26:14.716 19:41:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:14.716 19:41:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:26:14.716 19:41:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:26:14.716 19:41:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:26:14.716 19:41:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:26:14.716 19:41:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:14.716 19:41:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:14.977 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:26:14.977 fio-3.35 00:26:14.977 Starting 1 thread 00:26:14.977 EAL: No free 2048 kB hugepages reported on node 1 00:26:17.526 00:26:17.526 test: (groupid=0, jobs=1): err= 0: pid=3716462: Wed May 15 19:41:43 2024 00:26:17.526 read: IOPS=8938, BW=140MiB/s (146MB/s)(280MiB/2006msec) 00:26:17.526 slat (usec): min=3, max=108, avg= 3.71, stdev= 1.61 00:26:17.526 clat (usec): min=1607, max=21731, avg=8941.36, stdev=2266.61 00:26:17.526 lat (usec): min=1611, max=21735, avg=8945.07, stdev=2266.82 
00:26:17.526 clat percentiles (usec): 00:26:17.526 | 1.00th=[ 4752], 5.00th=[ 5604], 10.00th=[ 6128], 20.00th=[ 6915], 00:26:17.526 | 30.00th=[ 7570], 40.00th=[ 8160], 50.00th=[ 8717], 60.00th=[ 9241], 00:26:17.526 | 70.00th=[ 9896], 80.00th=[11076], 90.00th=[12256], 95.00th=[12518], 00:26:17.526 | 99.00th=[14746], 99.50th=[15795], 99.90th=[16581], 99.95th=[16909], 00:26:17.526 | 99.99th=[19006] 00:26:17.526 bw ( KiB/s): min=62432, max=82432, per=49.61%, avg=70952.00, stdev=9390.04, samples=4 00:26:17.526 iops : min= 3902, max= 5152, avg=4434.50, stdev=586.88, samples=4 00:26:17.526 write: IOPS=5330, BW=83.3MiB/s (87.3MB/s)(144MiB/1734msec); 0 zone resets 00:26:17.526 slat (usec): min=40, max=355, avg=41.25, stdev= 7.86 00:26:17.526 clat (usec): min=3291, max=15731, avg=9570.17, stdev=1616.85 00:26:17.526 lat (usec): min=3332, max=15869, avg=9611.42, stdev=1618.83 00:26:17.526 clat percentiles (usec): 00:26:17.526 | 1.00th=[ 6325], 5.00th=[ 7308], 10.00th=[ 7701], 20.00th=[ 8160], 00:26:17.526 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9765], 00:26:17.526 | 70.00th=[10159], 80.00th=[10814], 90.00th=[11863], 95.00th=[12518], 00:26:17.526 | 99.00th=[13829], 99.50th=[14484], 99.90th=[15008], 99.95th=[15270], 00:26:17.526 | 99.99th=[15795] 00:26:17.526 bw ( KiB/s): min=66080, max=85504, per=86.70%, avg=73944.00, stdev=9262.64, samples=4 00:26:17.526 iops : min= 4130, max= 5344, avg=4621.50, stdev=578.91, samples=4 00:26:17.526 lat (msec) : 2=0.01%, 4=0.12%, 10=68.59%, 20=31.27%, 50=0.01% 00:26:17.526 cpu : usr=82.50%, sys=14.51%, ctx=12, majf=0, minf=14 00:26:17.526 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:17.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.526 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:17.526 issued rwts: total=17930,9243,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.526 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.526 00:26:17.526 Run status group 0 (all jobs): 00:26:17.526 READ: bw=140MiB/s (146MB/s), 140MiB/s-140MiB/s (146MB/s-146MB/s), io=280MiB (294MB), run=2006-2006msec 00:26:17.526 WRITE: bw=83.3MiB/s (87.3MB/s), 83.3MiB/s-83.3MiB/s (87.3MB/s-87.3MB/s), io=144MiB (151MB), run=1734-1734msec 00:26:17.526 19:41:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:17.526 19:41:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.526 19:41:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.526 19:41:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.526 19:41:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:26:17.526 19:41:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:26:17.526 19:41:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:26:17.527 19:41:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@84 -- # nvmftestfini 00:26:17.527 19:41:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:17.527 19:41:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:26:17.527 19:41:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:17.527 19:41:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:26:17.527 19:41:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:17.527 19:41:43 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:17.527 rmmod nvme_tcp 00:26:17.527 rmmod nvme_fabrics 00:26:17.527 rmmod nvme_keyring 00:26:17.527 19:41:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:17.527 19:41:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:26:17.527 19:41:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:26:17.527 19:41:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3715102 ']' 00:26:17.527 19:41:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 3715102 00:26:17.527 19:41:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 3715102 ']' 00:26:17.527 19:41:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 3715102 00:26:17.527 19:41:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:26:17.527 19:41:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:17.527 19:41:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3715102 00:26:17.527 19:41:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:17.527 19:41:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:17.527 19:41:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3715102' 00:26:17.527 killing process with pid 3715102 00:26:17.527 19:41:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 3715102 00:26:17.527 [2024-05-15 19:41:43.441650] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:17.527 19:41:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 3715102 00:26:17.527 19:41:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:17.527 19:41:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:17.527 19:41:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:17.527 19:41:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:17.527 19:41:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:17.527 19:41:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:17.527 19:41:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:17.527 19:41:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.076 19:41:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:20.076 00:26:20.076 real 0m17.574s 00:26:20.076 user 0m56.808s 00:26:20.076 sys 0m8.126s 00:26:20.076 19:41:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:20.076 19:41:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.076 ************************************ 00:26:20.076 END TEST nvmf_fio_host 00:26:20.076 ************************************ 00:26:20.076 19:41:45 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:20.076 19:41:45 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:20.076 19:41:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # 
xtrace_disable 00:26:20.076 19:41:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:20.076 ************************************ 00:26:20.076 START TEST nvmf_failover 00:26:20.076 ************************************ 00:26:20.076 19:41:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:20.076 * Looking for test storage... 00:26:20.076 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:20.076 19:41:45 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:20.076 19:41:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:26:20.076 19:41:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:20.076 19:41:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:20.076 19:41:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:20.076 19:41:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:20.076 19:41:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:20.076 19:41:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:20.076 19:41:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:20.076 19:41:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:20.077 19:41:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:20.077 19:41:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:20.077 19:41:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:20.077 19:41:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:20.077 19:41:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:20.077 19:41:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:20.077 19:41:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:20.077 19:41:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:20.077 19:41:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:20.077 19:41:45 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:20.077 19:41:45 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:20.077 19:41:45 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:20.077 19:41:45 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.077 19:41:45 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.077 19:41:45 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.077 19:41:45 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:26:20.077 19:41:45 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.077 19:41:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:26:20.077 19:41:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:20.077 19:41:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:20.077 19:41:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:20.077 19:41:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:20.077 19:41:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:20.077 19:41:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:20.077 19:41:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:20.077 19:41:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:20.077 19:41:45 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:20.077 19:41:45 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:20.077 19:41:45 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:20.077 19:41:45 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:20.077 19:41:45 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:26:20.077 19:41:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:20.077 19:41:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:20.077 19:41:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:20.077 19:41:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:26:20.077 19:41:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:20.077 19:41:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:20.077 19:41:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:20.077 19:41:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.077 19:41:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:20.077 19:41:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:20.077 19:41:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:26:20.077 19:41:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:28.224 19:41:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:28.224 19:41:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:26:28.224 19:41:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:28.224 19:41:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:28.224 19:41:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:28.224 19:41:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:28.224 19:41:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:28.224 19:41:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:26:28.224 19:41:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:28.224 19:41:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:26:28.224 19:41:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:26:28.224 19:41:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:26:28.224 19:41:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:26:28.224 19:41:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:26:28.224 19:41:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:26:28.224 19:41:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:28.224 19:41:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:28.224 19:41:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:28.224 19:41:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:28.224 19:41:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:28.224 19:41:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:28.224 19:41:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:28.224 19:41:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:28.224 19:41:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:28.224 19:41:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:28.224 19:41:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:28.224 19:41:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:28.224 19:41:53 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:28.224 19:41:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:28.224 19:41:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:28.224 19:41:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:28.224 19:41:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:28.224 19:41:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:28.224 19:41:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:28.224 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:28.224 19:41:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:28.224 19:41:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:28.224 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:28.225 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:28.225 Found net devices under 0000:31:00.0: cvl_0_0 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:28.225 Found net devices under 0000:31:00.1: cvl_0_1 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:28.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:28.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.592 ms 00:26:28.225 00:26:28.225 --- 10.0.0.2 ping statistics --- 00:26:28.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.225 rtt min/avg/max/mdev = 0.592/0.592/0.592/0.000 ms 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:28.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:28.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:26:28.225 00:26:28.225 --- 10.0.0.1 ping statistics --- 00:26:28.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.225 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3721476 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3721476 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 3721476 ']' 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:28.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:28.225 19:41:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:28.485 [2024-05-15 19:41:54.426022] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
00:26:28.485 [2024-05-15 19:41:54.426067] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:28.485 EAL: No free 2048 kB hugepages reported on node 1 00:26:28.485 [2024-05-15 19:41:54.500465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:28.485 [2024-05-15 19:41:54.564595] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:28.485 [2024-05-15 19:41:54.564633] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:28.485 [2024-05-15 19:41:54.564641] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:28.485 [2024-05-15 19:41:54.564647] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:28.486 [2024-05-15 19:41:54.564653] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:28.486 [2024-05-15 19:41:54.564703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:28.486 [2024-05-15 19:41:54.564833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:28.486 [2024-05-15 19:41:54.564834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:28.486 19:41:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:28.486 19:41:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:26:28.486 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:28.486 19:41:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:28.486 19:41:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:28.746 19:41:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:28.746 19:41:54 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:28.746 [2024-05-15 19:41:54.883595] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:28.746 19:41:54 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:29.007 Malloc0 00:26:29.007 19:41:55 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:29.268 19:41:55 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:29.529 19:41:55 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:29.790 [2024-05-15 19:41:55.771068] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:29.790 [2024-05-15 19:41:55.771344] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:29.790 19:41:55 nvmf_tcp.nvmf_failover 
-- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:30.051 [2024-05-15 19:41:55.987856] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:30.051 19:41:56 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:30.051 [2024-05-15 19:41:56.204565] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:30.051 19:41:56 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3721833 00:26:30.052 19:41:56 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:30.052 19:41:56 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:26:30.052 19:41:56 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3721833 /var/tmp/bdevperf.sock 00:26:30.052 19:41:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 3721833 ']' 00:26:30.052 19:41:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:30.052 19:41:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:30.052 19:41:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:30.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
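For reference, the target and initiator setup echoed in the surrounding trace boils down to the RPC sequence below. This is a condensed sketch rather than an excerpt of failover.sh: the long workspace prefix is shortened to $SPDK_DIR purely for readability (that variable is illustrative), and process management such as backgrounding bdevperf and waiting on its RPC socket is left to the test harness.

# create a TCP transport and a 64 MB malloc bdev (512-byte blocks) to export
$SPDK_DIR/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
$SPDK_DIR/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
$SPDK_DIR/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# expose the subsystem on three ports so listeners can be removed and re-added during failover
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
# bdevperf runs in RPC-driven mode (-z) on its own socket and attaches to the first listener
# ('&' stands in for the harness backgrounding the process and waiting for /var/tmp/bdevperf.sock)
$SPDK_DIR/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
$SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The listener add/remove calls that follow in the trace then exercise the failover path while bdevperf keeps I/O running against NVMe0n1.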
00:26:30.313 19:41:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:30.313 19:41:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:31.255 19:41:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:31.255 19:41:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:26:31.255 19:41:57 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:31.255 NVMe0n1 00:26:31.515 19:41:57 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:31.776 00:26:31.776 19:41:57 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3722172 00:26:31.776 19:41:57 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:26:31.776 19:41:57 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:33.161 19:41:58 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:33.161 [2024-05-15 19:41:59.101645] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f3820 is same with the state(5) to be set 00:26:33.161 [2024-05-15 19:41:59.101700] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f3820 is same with the state(5) to be set 00:26:33.161 [2024-05-15 19:41:59.101707] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f3820 is same with the state(5) to be set 00:26:33.161 [2024-05-15 19:41:59.101712] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f3820 is same with the state(5) to be set 00:26:33.161 [2024-05-15 19:41:59.101716] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f3820 is same with the state(5) to be set 00:26:33.161 [2024-05-15 19:41:59.101721] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f3820 is same with the state(5) to be set 00:26:33.161 [2024-05-15 19:41:59.101725] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f3820 is same with the state(5) to be set 00:26:33.161 [2024-05-15 19:41:59.101730] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f3820 is same with the state(5) to be set 00:26:33.161 [2024-05-15 19:41:59.101734] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f3820 is same with the state(5) to be set 00:26:33.161 [2024-05-15 19:41:59.101738] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f3820 is same with the state(5) to be set 00:26:33.161 [2024-05-15 19:41:59.101743] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f3820 is same with the state(5) to be set 00:26:33.161 [2024-05-15 19:41:59.101747] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f3820 is same with the state(5) to be set 00:26:33.161 [2024-05-15 19:41:59.101751] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f3820 is same with the state(5) to be set 00:26:33.161 [2024-05-15 19:41:59.101756] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f3820 is same with the state(5) to be set 00:26:33.161 [2024-05-15 19:41:59.101764] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f3820 is same with the state(5) to be set 00:26:33.161 [2024-05-15 19:41:59.101769] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f3820 is same with the state(5) to be set 00:26:33.161 [2024-05-15 19:41:59.101773] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f3820 is same with the state(5) to be set 00:26:33.161 [2024-05-15 19:41:59.101777] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f3820 is same with the state(5) to be set 00:26:33.161 [2024-05-15 19:41:59.101782] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f3820 is same with the state(5) to be set 00:26:33.161 19:41:59 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:26:36.473 19:42:02 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:36.473 00:26:36.473 19:42:02 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:36.734 [2024-05-15 19:42:02.764055] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f4f70 is same with the state(5) to be set 00:26:36.734 [2024-05-15 19:42:02.764099] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f4f70 is same with the state(5) to be set 00:26:36.734 [2024-05-15 19:42:02.764106] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f4f70 is same with the state(5) to be set 00:26:36.734 [2024-05-15 19:42:02.764113] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f4f70 is same with the state(5) to be set 00:26:36.734 [2024-05-15 19:42:02.764119] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f4f70 is same with the state(5) to be set 00:26:36.734 [2024-05-15 19:42:02.764126] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f4f70 is same with the state(5) to be set 00:26:36.734 [2024-05-15 19:42:02.764133] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f4f70 is same with the state(5) to be set 00:26:36.734 [2024-05-15 19:42:02.764139] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f4f70 is same with the state(5) to be set 00:26:36.734 [2024-05-15 19:42:02.764146] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f4f70 is same with the state(5) to be set 00:26:36.734 [2024-05-15 19:42:02.764152] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f4f70 is same with the state(5) to be set 00:26:36.734 [2024-05-15 19:42:02.764159] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f4f70 is same with the state(5) to be set 00:26:36.734 [2024-05-15 19:42:02.764165] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f4f70 is same with the state(5) to be set 00:26:36.734 [2024-05-15 19:42:02.764172] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f4f70 is same with the state(5) to be set 00:26:36.734 [2024-05-15 19:42:02.764178] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f4f70 is same with the state(5) to be set 00:26:36.734 [2024-05-15 19:42:02.764184] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f4f70 is same with the state(5) to be set 00:26:36.734 [2024-05-15 19:42:02.764191] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f4f70 is same with the state(5) to be set 00:26:36.734 [2024-05-15 19:42:02.764197] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f4f70 is same with the state(5) to be set 00:26:36.734 [2024-05-15 19:42:02.764204] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f4f70 is same with the state(5) to be set 00:26:36.734 [2024-05-15 19:42:02.764219] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f4f70 is same with the state(5) to be set 00:26:36.734 [2024-05-15 19:42:02.764226] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f4f70 is same with the state(5) to be set 00:26:36.734 [2024-05-15 19:42:02.764232] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f4f70 is same with the state(5) to be set 00:26:36.734 [2024-05-15 19:42:02.764239] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f4f70 is same with the state(5) to be set 00:26:36.734 [2024-05-15 19:42:02.764245] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f4f70 is same with the state(5) to be set 00:26:36.734 [2024-05-15 19:42:02.764251] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f4f70 is same with the state(5) to be set 00:26:36.734 [2024-05-15 19:42:02.764258] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f4f70 is same with the state(5) to be set 00:26:36.734 [2024-05-15 19:42:02.764264] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f4f70 is same with the state(5) to be set 00:26:36.734 19:42:02 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:26:40.034 19:42:05 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:40.034 [2024-05-15 19:42:05.990308] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:40.034 19:42:06 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:26:40.975 19:42:07 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:41.236 [2024-05-15 19:42:07.221270] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221307] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221322] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221329] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221336] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221342] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221349] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221356] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221362] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221369] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221375] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221381] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221387] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221394] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221400] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221412] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221419] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221425] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221432] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221438] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221444] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221451] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221457] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221464] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221470] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221476] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221482] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221489] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221495] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221501] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221508] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221514] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221520] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221526] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221532] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221538] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221545] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221552] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221558] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221564] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221570] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221577] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221585] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221591] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221598] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221605] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 
00:26:41.236 [2024-05-15 19:42:07.221611] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221618] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221624] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221630] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221637] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221643] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221649] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221656] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221662] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221668] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221675] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221681] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221688] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221694] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221700] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221707] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221713] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221720] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221726] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221733] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221740] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221747] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is 
same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221753] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221760] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221769] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221775] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.236 [2024-05-15 19:42:07.221782] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.237 [2024-05-15 19:42:07.221788] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.237 [2024-05-15 19:42:07.221794] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.237 [2024-05-15 19:42:07.221801] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.237 [2024-05-15 19:42:07.221808] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.237 [2024-05-15 19:42:07.221814] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.237 [2024-05-15 19:42:07.221820] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.237 [2024-05-15 19:42:07.221827] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.237 [2024-05-15 19:42:07.221833] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.237 [2024-05-15 19:42:07.221839] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.237 [2024-05-15 19:42:07.221846] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.237 [2024-05-15 19:42:07.221853] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.237 [2024-05-15 19:42:07.221859] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.237 [2024-05-15 19:42:07.221865] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.237 [2024-05-15 19:42:07.221872] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.237 [2024-05-15 19:42:07.221878] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.237 [2024-05-15 19:42:07.221884] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.237 [2024-05-15 19:42:07.221890] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.237 [2024-05-15 19:42:07.221896] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.237 [2024-05-15 19:42:07.221902] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f5af0 is same with the state(5) to be set 00:26:41.237 19:42:07 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 3722172 00:26:47.839 0 00:26:47.839 19:42:13 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 3721833 00:26:47.839 19:42:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 3721833 ']' 00:26:47.839 19:42:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 3721833 00:26:47.839 19:42:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:26:47.839 19:42:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:47.839 19:42:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3721833 00:26:47.839 19:42:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:47.839 19:42:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:47.839 19:42:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3721833' 00:26:47.839 killing process with pid 3721833 00:26:47.839 19:42:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 3721833 00:26:47.839 19:42:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 3721833 00:26:47.839 19:42:13 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:47.839 [2024-05-15 19:41:56.279637] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:26:47.839 [2024-05-15 19:41:56.279690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3721833 ] 00:26:47.839 EAL: No free 2048 kB hugepages reported on node 1 00:26:47.839 [2024-05-15 19:41:56.362852] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.839 [2024-05-15 19:41:56.427389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:47.839 Running I/O for 15 seconds... 
00:26:47.839 [2024-05-15 19:41:59.104716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.839 [2024-05-15 19:41:59.104753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.839 [2024-05-15 19:41:59.104771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:105264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.839 [2024-05-15 19:41:59.104781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.839 [2024-05-15 19:41:59.104791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:105272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.839 [2024-05-15 19:41:59.104798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.839 [2024-05-15 19:41:59.104808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:105280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.839 [2024-05-15 19:41:59.104815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.839 [2024-05-15 19:41:59.104825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:105288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.839 [2024-05-15 19:41:59.104832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.839 [2024-05-15 19:41:59.104841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:105296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.839 [2024-05-15 19:41:59.104848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.839 [2024-05-15 19:41:59.104857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:105304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.839 [2024-05-15 19:41:59.104864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.839 [2024-05-15 19:41:59.104874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:105312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.839 [2024-05-15 19:41:59.104881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.839 [2024-05-15 19:41:59.104890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:105320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.839 [2024-05-15 19:41:59.104897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.839 [2024-05-15 19:41:59.104906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:105328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.839 [2024-05-15 19:41:59.104913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.839 [2024-05-15 19:41:59.104923] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:105336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.839 [2024-05-15 19:41:59.104930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.839 [2024-05-15 19:41:59.104944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:105344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.839 [2024-05-15 19:41:59.104952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.839 [2024-05-15 19:41:59.104961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:105352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.839 [2024-05-15 19:41:59.104968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.839 [2024-05-15 19:41:59.104977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:105360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.839 [2024-05-15 19:41:59.104984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.839 [2024-05-15 19:41:59.104993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:105368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.839 [2024-05-15 19:41:59.105000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.839 [2024-05-15 19:41:59.105009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:105376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.839 [2024-05-15 19:41:59.105016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.839 [2024-05-15 19:41:59.105025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:105384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.839 [2024-05-15 19:41:59.105032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.839 [2024-05-15 19:41:59.105040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:105392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.839 [2024-05-15 19:41:59.105047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.839 [2024-05-15 19:41:59.105056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:105400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.839 [2024-05-15 19:41:59.105063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.839 [2024-05-15 19:41:59.105072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:105408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.839 [2024-05-15 19:41:59.105079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.839 [2024-05-15 19:41:59.105088] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:105416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.839 [2024-05-15 19:41:59.105095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.839 [2024-05-15 19:41:59.105104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:105424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.839 [2024-05-15 19:41:59.105111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.839 [2024-05-15 19:41:59.105119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:105432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.839 [2024-05-15 19:41:59.105126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.839 [2024-05-15 19:41:59.105135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:105440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.839 [2024-05-15 19:41:59.105144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.839 [2024-05-15 19:41:59.105154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:105448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.839 [2024-05-15 19:41:59.105161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.839 [2024-05-15 19:41:59.105170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:105456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.839 [2024-05-15 19:41:59.105176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.839 [2024-05-15 19:41:59.105185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:105464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.839 [2024-05-15 19:41:59.105192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.839 [2024-05-15 19:41:59.105202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.839 [2024-05-15 19:41:59.105209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.839 [2024-05-15 19:41:59.105218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:105480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.839 [2024-05-15 19:41:59.105225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.840 [2024-05-15 19:41:59.105234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:105488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.840 [2024-05-15 19:41:59.105240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.840 [2024-05-15 19:41:59.105250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:105496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.840 [2024-05-15 19:41:59.105257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.840 [2024-05-15 19:41:59.105266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:105504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.840 [2024-05-15 19:41:59.105273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.840 [2024-05-15 19:41:59.105282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:105512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.840 [2024-05-15 19:41:59.105289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.840 [2024-05-15 19:41:59.105297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:105520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.840 [2024-05-15 19:41:59.105305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.840 [2024-05-15 19:41:59.105318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:105528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.840 [2024-05-15 19:41:59.105325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.840 [2024-05-15 19:41:59.105334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:105536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.840 [2024-05-15 19:41:59.105343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.840 [2024-05-15 19:41:59.105353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:105544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.840 [2024-05-15 19:41:59.105360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.840 [2024-05-15 19:41:59.105369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:105552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.840 [2024-05-15 19:41:59.105376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.840 [2024-05-15 19:41:59.105386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:105560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.840 [2024-05-15 19:41:59.105393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.840 [2024-05-15 19:41:59.105402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:105568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.840 [2024-05-15 19:41:59.105409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.840 [2024-05-15 19:41:59.105418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:105576 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.840 [2024-05-15 19:41:59.105426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.840 [2024-05-15 19:41:59.105435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:105584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.840 [2024-05-15 19:41:59.105442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.840 [2024-05-15 19:41:59.105451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:105592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.840 [2024-05-15 19:41:59.105458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.840 [2024-05-15 19:41:59.105467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:105600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.840 [2024-05-15 19:41:59.105474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.840 [2024-05-15 19:41:59.105482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:105608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.840 [2024-05-15 19:41:59.105489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.840 [2024-05-15 19:41:59.105498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:105616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.840 [2024-05-15 19:41:59.105505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.840 [2024-05-15 19:41:59.105514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:105624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.840 [2024-05-15 19:41:59.105521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.840 [2024-05-15 19:41:59.105530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:105632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.840 [2024-05-15 19:41:59.105537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.840 [2024-05-15 19:41:59.105546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:105640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.840 [2024-05-15 19:41:59.105555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.840 [2024-05-15 19:41:59.105563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:105648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.840 [2024-05-15 19:41:59.105570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.840 [2024-05-15 19:41:59.105579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:47.840 [2024-05-15 19:41:59.105586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.840 [2024-05-15 19:41:59.105595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:105664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.840 [2024-05-15 19:41:59.105602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.840 [2024-05-15 19:41:59.105611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:105672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.840 [2024-05-15 19:41:59.105618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.840 [2024-05-15 19:41:59.105627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:105680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.840 [2024-05-15 19:41:59.105634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.840 [2024-05-15 19:41:59.105643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:105688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.840 [2024-05-15 19:41:59.105650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.840 [2024-05-15 19:41:59.105659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:105696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.840 [2024-05-15 19:41:59.105665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.840 [2024-05-15 19:41:59.105674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:105704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.840 [2024-05-15 19:41:59.105681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.840 [2024-05-15 19:41:59.105690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:105712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.840 [2024-05-15 19:41:59.105697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.840 [2024-05-15 19:41:59.105706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:105720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.840 [2024-05-15 19:41:59.105712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.840 [2024-05-15 19:41:59.105721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:105728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.840 [2024-05-15 19:41:59.105728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.840 [2024-05-15 19:41:59.105737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.840 [2024-05-15 19:41:59.105744] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.840 [2024-05-15 19:41:59.105753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:105744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.840 [2024-05-15 19:41:59.105764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.840 [2024-05-15 19:41:59.105773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:105752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.840 [2024-05-15 19:41:59.105780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.840 [2024-05-15 19:41:59.105790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:105760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.840 [2024-05-15 19:41:59.105797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.840 [2024-05-15 19:41:59.105817] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.840 [2024-05-15 19:41:59.105825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105768 len:8 PRP1 0x0 PRP2 0x0 00:26:47.840 [2024-05-15 19:41:59.105833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.840 [2024-05-15 19:41:59.105867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.840 [2024-05-15 19:41:59.105876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.840 [2024-05-15 19:41:59.105884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.840 [2024-05-15 19:41:59.105891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.840 [2024-05-15 19:41:59.105900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.840 [2024-05-15 19:41:59.105907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.840 [2024-05-15 19:41:59.105915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.840 [2024-05-15 19:41:59.105922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.840 [2024-05-15 19:41:59.105930] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399cb0 is same with the state(5) to be set 00:26:47.840 [2024-05-15 19:41:59.106077] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.841 [2024-05-15 19:41:59.106084] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.841 [2024-05-15 19:41:59.106091] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105776 len:8 PRP1 0x0 PRP2 0x0 00:26:47.841 [2024-05-15 19:41:59.106098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.841 [2024-05-15 19:41:59.106107] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.841 [2024-05-15 19:41:59.106113] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.841 [2024-05-15 19:41:59.106120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105784 len:8 PRP1 0x0 PRP2 0x0 00:26:47.841 [2024-05-15 19:41:59.106127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.841 [2024-05-15 19:41:59.106134] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.841 [2024-05-15 19:41:59.106139] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.841 [2024-05-15 19:41:59.106148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105792 len:8 PRP1 0x0 PRP2 0x0 00:26:47.841 [2024-05-15 19:41:59.106155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.841 [2024-05-15 19:41:59.106162] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.841 [2024-05-15 19:41:59.106168] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.841 [2024-05-15 19:41:59.106174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105800 len:8 PRP1 0x0 PRP2 0x0 00:26:47.841 [2024-05-15 19:41:59.106181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.841 [2024-05-15 19:41:59.106188] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.841 [2024-05-15 19:41:59.106193] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.841 [2024-05-15 19:41:59.106199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105808 len:8 PRP1 0x0 PRP2 0x0 00:26:47.841 [2024-05-15 19:41:59.106206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.841 [2024-05-15 19:41:59.106214] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.841 [2024-05-15 19:41:59.106219] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.841 [2024-05-15 19:41:59.106225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105816 len:8 PRP1 0x0 PRP2 0x0 00:26:47.841 [2024-05-15 19:41:59.106232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.841 [2024-05-15 19:41:59.106240] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.841 [2024-05-15 19:41:59.106245] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.841 [2024-05-15 19:41:59.106251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105824 
len:8 PRP1 0x0 PRP2 0x0 00:26:47.841 [2024-05-15 19:41:59.106258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.841 [2024-05-15 19:41:59.106266] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.841 [2024-05-15 19:41:59.106271] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.841 [2024-05-15 19:41:59.106277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105832 len:8 PRP1 0x0 PRP2 0x0 00:26:47.841 [2024-05-15 19:41:59.106285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.841 [2024-05-15 19:41:59.106292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.841 [2024-05-15 19:41:59.106298] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.841 [2024-05-15 19:41:59.106304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105840 len:8 PRP1 0x0 PRP2 0x0 00:26:47.841 [2024-05-15 19:41:59.106311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.841 [2024-05-15 19:41:59.106323] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.841 [2024-05-15 19:41:59.106328] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.841 [2024-05-15 19:41:59.106334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105848 len:8 PRP1 0x0 PRP2 0x0 00:26:47.841 [2024-05-15 19:41:59.106342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.841 [2024-05-15 19:41:59.106349] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.841 [2024-05-15 19:41:59.106356] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.841 [2024-05-15 19:41:59.106362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105856 len:8 PRP1 0x0 PRP2 0x0 00:26:47.841 [2024-05-15 19:41:59.106369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.841 [2024-05-15 19:41:59.106377] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.841 [2024-05-15 19:41:59.106382] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.841 [2024-05-15 19:41:59.106388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105000 len:8 PRP1 0x0 PRP2 0x0 00:26:47.841 [2024-05-15 19:41:59.106395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.841 [2024-05-15 19:41:59.106403] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.841 [2024-05-15 19:41:59.106408] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.841 [2024-05-15 19:41:59.106414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105008 len:8 PRP1 0x0 PRP2 0x0 00:26:47.841 [2024-05-15 
19:41:59.106421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.841 [2024-05-15 19:41:59.106428] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.841 [2024-05-15 19:41:59.106433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.841 [2024-05-15 19:41:59.106440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105016 len:8 PRP1 0x0 PRP2 0x0 00:26:47.841 [2024-05-15 19:41:59.106447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.841 [2024-05-15 19:41:59.106454] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.841 [2024-05-15 19:41:59.106459] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.841 [2024-05-15 19:41:59.106465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105024 len:8 PRP1 0x0 PRP2 0x0 00:26:47.841 [2024-05-15 19:41:59.106473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.841 [2024-05-15 19:41:59.106480] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.841 [2024-05-15 19:41:59.106486] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.841 [2024-05-15 19:41:59.106492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105032 len:8 PRP1 0x0 PRP2 0x0 00:26:47.841 [2024-05-15 19:41:59.106499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.841 [2024-05-15 19:41:59.106507] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.841 [2024-05-15 19:41:59.106512] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.841 [2024-05-15 19:41:59.106518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105040 len:8 PRP1 0x0 PRP2 0x0 00:26:47.841 [2024-05-15 19:41:59.106525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.841 [2024-05-15 19:41:59.106532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.841 [2024-05-15 19:41:59.106538] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.841 [2024-05-15 19:41:59.106544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105048 len:8 PRP1 0x0 PRP2 0x0 00:26:47.841 [2024-05-15 19:41:59.106551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.841 [2024-05-15 19:41:59.106560] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.841 [2024-05-15 19:41:59.106565] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.841 [2024-05-15 19:41:59.106571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105056 len:8 PRP1 0x0 PRP2 0x0 00:26:47.841 [2024-05-15 19:41:59.106578] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.841 [2024-05-15 19:41:59.106586] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.841 [2024-05-15 19:41:59.106591] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.841 [2024-05-15 19:41:59.106597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105064 len:8 PRP1 0x0 PRP2 0x0 00:26:47.841 [2024-05-15 19:41:59.106604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.841 [2024-05-15 19:41:59.106612] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.841 [2024-05-15 19:41:59.106617] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.841 [2024-05-15 19:41:59.106623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105072 len:8 PRP1 0x0 PRP2 0x0 00:26:47.841 [2024-05-15 19:41:59.106630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.841 [2024-05-15 19:41:59.106638] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.841 [2024-05-15 19:41:59.106643] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.841 [2024-05-15 19:41:59.106649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105080 len:8 PRP1 0x0 PRP2 0x0 00:26:47.841 [2024-05-15 19:41:59.106656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.841 [2024-05-15 19:41:59.106664] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.841 [2024-05-15 19:41:59.106669] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.841 [2024-05-15 19:41:59.106675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105088 len:8 PRP1 0x0 PRP2 0x0 00:26:47.841 [2024-05-15 19:41:59.106682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.841 [2024-05-15 19:41:59.106690] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.841 [2024-05-15 19:41:59.106695] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.841 [2024-05-15 19:41:59.106701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105096 len:8 PRP1 0x0 PRP2 0x0 00:26:47.841 [2024-05-15 19:41:59.106708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.841 [2024-05-15 19:41:59.106715] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.841 [2024-05-15 19:41:59.106720] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.842 [2024-05-15 19:41:59.106727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105104 len:8 PRP1 0x0 PRP2 0x0 00:26:47.842 [2024-05-15 19:41:59.106734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.842 [2024-05-15 19:41:59.106741] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.842 [2024-05-15 19:41:59.106747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.842 [2024-05-15 19:41:59.106753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105112 len:8 PRP1 0x0 PRP2 0x0 00:26:47.842 [2024-05-15 19:41:59.106761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.842 [2024-05-15 19:41:59.106769] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.842 [2024-05-15 19:41:59.106774] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.842 [2024-05-15 19:41:59.106780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105120 len:8 PRP1 0x0 PRP2 0x0 00:26:47.842 [2024-05-15 19:41:59.106787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.842 [2024-05-15 19:41:59.106794] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.842 [2024-05-15 19:41:59.106799] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.842 [2024-05-15 19:41:59.106806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105128 len:8 PRP1 0x0 PRP2 0x0 00:26:47.842 [2024-05-15 19:41:59.106812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.842 [2024-05-15 19:41:59.106820] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.842 [2024-05-15 19:41:59.106825] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.842 [2024-05-15 19:41:59.106831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105136 len:8 PRP1 0x0 PRP2 0x0 00:26:47.842 [2024-05-15 19:41:59.106838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.842 [2024-05-15 19:41:59.106845] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.842 [2024-05-15 19:41:59.106851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.842 [2024-05-15 19:41:59.106857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105144 len:8 PRP1 0x0 PRP2 0x0 00:26:47.842 [2024-05-15 19:41:59.106864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.842 [2024-05-15 19:41:59.106871] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.842 [2024-05-15 19:41:59.106877] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.842 [2024-05-15 19:41:59.106882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105152 len:8 PRP1 0x0 PRP2 0x0 00:26:47.842 [2024-05-15 19:41:59.106890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:26:47.842 [2024-05-15 19:41:59.106898] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.842 [2024-05-15 19:41:59.106903] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.842 [2024-05-15 19:41:59.106909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105160 len:8 PRP1 0x0 PRP2 0x0 00:26:47.842 [2024-05-15 19:41:59.106916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.842 [2024-05-15 19:41:59.106924] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.842 [2024-05-15 19:41:59.106929] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.842 [2024-05-15 19:41:59.106935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105168 len:8 PRP1 0x0 PRP2 0x0 00:26:47.842 [2024-05-15 19:41:59.106943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.842 [2024-05-15 19:41:59.106954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.842 [2024-05-15 19:41:59.106959] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.842 [2024-05-15 19:41:59.106967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105176 len:8 PRP1 0x0 PRP2 0x0 00:26:47.842 [2024-05-15 19:41:59.106974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.842 [2024-05-15 19:41:59.106982] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.842 [2024-05-15 19:41:59.106987] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.842 [2024-05-15 19:41:59.106993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105184 len:8 PRP1 0x0 PRP2 0x0 00:26:47.842 [2024-05-15 19:41:59.107001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.842 [2024-05-15 19:41:59.107008] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.842 [2024-05-15 19:41:59.107013] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.842 [2024-05-15 19:41:59.107019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105192 len:8 PRP1 0x0 PRP2 0x0 00:26:47.842 [2024-05-15 19:41:59.107026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.842 [2024-05-15 19:41:59.107033] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.842 [2024-05-15 19:41:59.107039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.842 [2024-05-15 19:41:59.107045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105200 len:8 PRP1 0x0 PRP2 0x0 00:26:47.842 [2024-05-15 19:41:59.107052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.842 [2024-05-15 19:41:59.107060] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.842 [2024-05-15 19:41:59.107065] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.842 [2024-05-15 19:41:59.107071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105208 len:8 PRP1 0x0 PRP2 0x0 00:26:47.842 [2024-05-15 19:41:59.107078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.842 [2024-05-15 19:41:59.107085] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.842 [2024-05-15 19:41:59.107091] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.842 [2024-05-15 19:41:59.107097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105216 len:8 PRP1 0x0 PRP2 0x0 00:26:47.842 [2024-05-15 19:41:59.107104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.842 [2024-05-15 19:41:59.107112] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.842 [2024-05-15 19:41:59.107117] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.842 [2024-05-15 19:41:59.107123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105224 len:8 PRP1 0x0 PRP2 0x0 00:26:47.842 [2024-05-15 19:41:59.107130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.842 [2024-05-15 19:41:59.107138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.842 [2024-05-15 19:41:59.107143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.842 [2024-05-15 19:41:59.107149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105232 len:8 PRP1 0x0 PRP2 0x0 00:26:47.842 [2024-05-15 19:41:59.107156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.842 [2024-05-15 19:41:59.107166] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.842 [2024-05-15 19:41:59.107171] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.842 [2024-05-15 19:41:59.107177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105240 len:8 PRP1 0x0 PRP2 0x0 00:26:47.842 [2024-05-15 19:41:59.107184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.842 [2024-05-15 19:41:59.107191] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.842 [2024-05-15 19:41:59.107197] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.842 [2024-05-15 19:41:59.107203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105248 len:8 PRP1 0x0 PRP2 0x0 00:26:47.842 [2024-05-15 19:41:59.107210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.842 [2024-05-15 19:41:59.107217] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:26:47.842 [2024-05-15 19:41:59.107223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.842 [2024-05-15 19:41:59.107229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105864 len:8 PRP1 0x0 PRP2 0x0 00:26:47.842 [2024-05-15 19:41:59.107235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.842 [2024-05-15 19:41:59.107243] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.842 [2024-05-15 19:41:59.107249] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.842 [2024-05-15 19:41:59.107254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105872 len:8 PRP1 0x0 PRP2 0x0 00:26:47.842 [2024-05-15 19:41:59.107261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.842 [2024-05-15 19:41:59.107269] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.842 [2024-05-15 19:41:59.117749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.842 [2024-05-15 19:41:59.117779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105880 len:8 PRP1 0x0 PRP2 0x0 00:26:47.842 [2024-05-15 19:41:59.117789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.842 [2024-05-15 19:41:59.117802] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.842 [2024-05-15 19:41:59.117808] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.842 [2024-05-15 19:41:59.117815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105888 len:8 PRP1 0x0 PRP2 0x0 00:26:47.842 [2024-05-15 19:41:59.117825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.842 [2024-05-15 19:41:59.117833] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.842 [2024-05-15 19:41:59.117840] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.842 [2024-05-15 19:41:59.117846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105896 len:8 PRP1 0x0 PRP2 0x0 00:26:47.842 [2024-05-15 19:41:59.117853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.842 [2024-05-15 19:41:59.117861] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.842 [2024-05-15 19:41:59.117867] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.842 [2024-05-15 19:41:59.117873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105904 len:8 PRP1 0x0 PRP2 0x0 00:26:47.843 [2024-05-15 19:41:59.117886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.843 [2024-05-15 19:41:59.117894] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.843 [2024-05-15 
19:41:59.117900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.843 [2024-05-15 19:41:59.117906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105912 len:8 PRP1 0x0 PRP2 0x0 00:26:47.843 [2024-05-15 19:41:59.117913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.843 [2024-05-15 19:41:59.117921] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.843 [2024-05-15 19:41:59.117927] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.843 [2024-05-15 19:41:59.117934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105920 len:8 PRP1 0x0 PRP2 0x0 00:26:47.843 [2024-05-15 19:41:59.117943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.843 [2024-05-15 19:41:59.117951] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.843 [2024-05-15 19:41:59.117957] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.843 [2024-05-15 19:41:59.117963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105928 len:8 PRP1 0x0 PRP2 0x0 00:26:47.843 [2024-05-15 19:41:59.117971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.843 [2024-05-15 19:41:59.117978] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.843 [2024-05-15 19:41:59.117984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.843 [2024-05-15 19:41:59.117990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105936 len:8 PRP1 0x0 PRP2 0x0 00:26:47.843 [2024-05-15 19:41:59.117998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.843 [2024-05-15 19:41:59.118005] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.843 [2024-05-15 19:41:59.118011] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.843 [2024-05-15 19:41:59.118017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105944 len:8 PRP1 0x0 PRP2 0x0 00:26:47.843 [2024-05-15 19:41:59.118025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.843 [2024-05-15 19:41:59.118034] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.843 [2024-05-15 19:41:59.118040] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.843 [2024-05-15 19:41:59.118047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105952 len:8 PRP1 0x0 PRP2 0x0 00:26:47.843 [2024-05-15 19:41:59.118055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.843 [2024-05-15 19:41:59.118062] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.843 [2024-05-15 19:41:59.118068] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[... 19:41:59.118074 through 19:41:59.127806: the same four-entry pattern repeats for each queued WRITE (sqid:1 cid:0 nsid:1 len:8 PRP1 0x0 PRP2 0x0, lba 105960-106016 and then 105256-105768): nvme_qpair.c: 243:nvme_io_qpair_print_command prints the command, 474:spdk_nvme_print_completion reports ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0, 579:nvme_qpair_abort_queued_reqs logs "aborting queued i/o", and 558:nvme_qpair_manual_complete_request logs "Command completed manually" ...]
00:26:47.846 [2024-05-15 19:41:59.127844] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23bac20 was disconnected and freed. reset controller.
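The run of notices above is the abort storm that precedes a path failover: every WRITE still queued on the deleted submission queue is completed manually with ABORTED - SQ DELETION, after which the qpair is freed and the entries that follow show bdev_nvme failing over from 10.0.0.2:4420 to 10.0.0.2:4421 and resetting the controller. As a minimal sketch (not part of the test suite), one might pull that timeline out of a captured console log like this; "console.log" is a hypothetical capture path, and the grep patterns are simply the function names printed in these messages:

  # Hedged sketch: summarize the abort/failover episode from a saved console log.
  LOG=console.log
  # Count individual aborted I/Os (works even when several entries share one console line).
  grep -o 'ABORTED - SQ DELETION' "$LOG" | wc -l
  # Keep only the controller-level events, dropping the per-command notices.
  grep -E 'bdev_nvme_disconnected_qpair_cb|bdev_nvme_failover_trid|nvme_ctrlr_fail|nvme_ctrlr_disconnect|_bdev_nvme_reset_ctrlr_complete' "$LOG"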
00:26:47.846 [2024-05-15 19:41:59.127859] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:26:47.846 [2024-05-15 19:41:59.127867] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:47.846 [2024-05-15 19:41:59.127908] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2399cb0 (9): Bad file descriptor
00:26:47.846 [2024-05-15 19:41:59.131495] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:47.846 [2024-05-15 19:41:59.298911] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
[... 19:42:02.764922 through 19:42:02.766230: the abort pattern repeats: queued READs (sqid:1, varying cid, nsid:1 len:8, lba 12016-12624, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and two WRITEs (cid:115 lba:12824 and cid:86 lba:12832, SGL DATA BLOCK OFFSET 0x0 len:0x1000) are each printed by nvme_qpair.c: 243:nvme_io_qpair_print_command and completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 by 474:spdk_nvme_print_completion; the run continues below ...]
00:26:47.848 [2024-05-15 19:42:02.766238] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.848 [2024-05-15 19:42:02.766247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.848 [2024-05-15 19:42:02.766254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.848 [2024-05-15 19:42:02.766264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.848 [2024-05-15 19:42:02.766271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.848 [2024-05-15 19:42:02.766280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.848 [2024-05-15 19:42:02.766287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.848 [2024-05-15 19:42:02.766299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:12656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.848 [2024-05-15 19:42:02.766306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.848 [2024-05-15 19:42:02.766319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.848 [2024-05-15 19:42:02.766327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.848 [2024-05-15 19:42:02.766336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.848 [2024-05-15 19:42:02.766343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.848 [2024-05-15 19:42:02.766352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.848 [2024-05-15 19:42:02.766359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.848 [2024-05-15 19:42:02.766368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.848 [2024-05-15 19:42:02.766374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.848 [2024-05-15 19:42:02.766384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.848 [2024-05-15 19:42:02.766390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.848 [2024-05-15 19:42:02.766400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:12848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.848 [2024-05-15 19:42:02.766407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.848 [2024-05-15 19:42:02.766415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.848 [2024-05-15 19:42:02.766422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.848 [2024-05-15 19:42:02.766431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.848 [2024-05-15 19:42:02.766438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.848 [2024-05-15 19:42:02.766446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.848 [2024-05-15 19:42:02.766453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.848 [2024-05-15 19:42:02.766462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:12880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.848 [2024-05-15 19:42:02.766469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.848 [2024-05-15 19:42:02.766478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.849 [2024-05-15 19:42:02.766485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.849 [2024-05-15 19:42:02.766494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.849 [2024-05-15 19:42:02.766502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.849 [2024-05-15 19:42:02.766511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.849 [2024-05-15 19:42:02.766518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.849 [2024-05-15 19:42:02.766527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.849 [2024-05-15 19:42:02.766535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.849 [2024-05-15 19:42:02.766544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:12728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.849 [2024-05-15 19:42:02.766551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.849 [2024-05-15 19:42:02.766560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.849 [2024-05-15 19:42:02.766567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:47.849 [2024-05-15 19:42:02.766576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.849 [2024-05-15 19:42:02.766583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.849 [2024-05-15 19:42:02.766592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.849 [2024-05-15 19:42:02.766599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.849 [2024-05-15 19:42:02.766607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.849 [2024-05-15 19:42:02.766615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.849 [2024-05-15 19:42:02.766624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.849 [2024-05-15 19:42:02.766631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.849 [2024-05-15 19:42:02.766640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.849 [2024-05-15 19:42:02.766647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.849 [2024-05-15 19:42:02.766656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:12768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.849 [2024-05-15 19:42:02.766663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.849 [2024-05-15 19:42:02.766672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.849 [2024-05-15 19:42:02.766679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.849 [2024-05-15 19:42:02.766687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.849 [2024-05-15 19:42:02.766694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.849 [2024-05-15 19:42:02.766703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.849 [2024-05-15 19:42:02.766712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.849 [2024-05-15 19:42:02.766721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.849 [2024-05-15 19:42:02.766728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.849 
[2024-05-15 19:42:02.766737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.849 [2024-05-15 19:42:02.766744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.849 [2024-05-15 19:42:02.766753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.849 [2024-05-15 19:42:02.766760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.849 [2024-05-15 19:42:02.766769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.849 [2024-05-15 19:42:02.766776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.849 [2024-05-15 19:42:02.766785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.849 [2024-05-15 19:42:02.766792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.849 [2024-05-15 19:42:02.766801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:12920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.849 [2024-05-15 19:42:02.766808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.849 [2024-05-15 19:42:02.766817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.849 [2024-05-15 19:42:02.766824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.849 [2024-05-15 19:42:02.766833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.849 [2024-05-15 19:42:02.766839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.849 [2024-05-15 19:42:02.766848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.849 [2024-05-15 19:42:02.766855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.849 [2024-05-15 19:42:02.766864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.849 [2024-05-15 19:42:02.766871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.849 [2024-05-15 19:42:02.766880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.849 [2024-05-15 19:42:02.766886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.849 [2024-05-15 19:42:02.766896] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.849 [2024-05-15 19:42:02.766902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.849 [2024-05-15 19:42:02.766913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.849 [2024-05-15 19:42:02.766919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.849 [2024-05-15 19:42:02.766928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:12984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.849 [2024-05-15 19:42:02.766935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.849 [2024-05-15 19:42:02.766944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:12992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.849 [2024-05-15 19:42:02.766951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.849 [2024-05-15 19:42:02.766960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.849 [2024-05-15 19:42:02.766967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.849 [2024-05-15 19:42:02.766977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:13008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.849 [2024-05-15 19:42:02.766984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.849 [2024-05-15 19:42:02.766993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.849 [2024-05-15 19:42:02.767000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.849 [2024-05-15 19:42:02.767008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.849 [2024-05-15 19:42:02.767015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.849 [2024-05-15 19:42:02.767034] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.849 [2024-05-15 19:42:02.767041] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.849 [2024-05-15 19:42:02.767047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13032 len:8 PRP1 0x0 PRP2 0x0 00:26:47.849 [2024-05-15 19:42:02.767055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.849 [2024-05-15 19:42:02.767091] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23bcdf0 was disconnected and freed. reset controller. 
00:26:47.849 [2024-05-15 19:42:02.767100] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 
00:26:47.849 [2024-05-15 19:42:02.767118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:47.849 [2024-05-15 19:42:02.767126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:47.849 [2024-05-15 19:42:02.767134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:47.849 [2024-05-15 19:42:02.767142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:47.849 [2024-05-15 19:42:02.767149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:47.849 [2024-05-15 19:42:02.767156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:47.849 [2024-05-15 19:42:02.767166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:47.849 [2024-05-15 19:42:02.767173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:47.849 [2024-05-15 19:42:02.767181] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:47.849 [2024-05-15 19:42:02.770772] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:26:47.849 [2024-05-15 19:42:02.770798] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2399cb0 (9): Bad file descriptor 
00:26:47.850 [2024-05-15 19:42:02.843294] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:26:47.850 [2024-05-15 19:42:07.224892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:84704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.850 [2024-05-15 19:42:07.224928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.850 [2024-05-15 19:42:07.224946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:84760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.850 [2024-05-15 19:42:07.224954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.850 [2024-05-15 19:42:07.224964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.850 [2024-05-15 19:42:07.224971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.850 [2024-05-15 19:42:07.224981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:84776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.850 [2024-05-15 19:42:07.224988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.850 [2024-05-15 19:42:07.224997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:84784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.850 [2024-05-15 19:42:07.225004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.850 [2024-05-15 19:42:07.225013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:84792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.850 [2024-05-15 19:42:07.225020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.850 [2024-05-15 19:42:07.225029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:84800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.850 [2024-05-15 19:42:07.225036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.850 [2024-05-15 19:42:07.225045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:84808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.850 [2024-05-15 19:42:07.225052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.850 [2024-05-15 19:42:07.225061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:84816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.850 [2024-05-15 19:42:07.225068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.850 [2024-05-15 19:42:07.225077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:84824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.850 [2024-05-15 19:42:07.225084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.850 [2024-05-15 19:42:07.225098] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:84832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.850 [2024-05-15 19:42:07.225105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.850 [2024-05-15 19:42:07.225114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:84840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.850 [2024-05-15 19:42:07.225121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.850 [2024-05-15 19:42:07.225130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:84848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.850 [2024-05-15 19:42:07.225137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.850 [2024-05-15 19:42:07.225146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.850 [2024-05-15 19:42:07.225153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.850 [2024-05-15 19:42:07.225162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.850 [2024-05-15 19:42:07.225169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.850 [2024-05-15 19:42:07.225178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:84872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.850 [2024-05-15 19:42:07.225185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.850 [2024-05-15 19:42:07.225194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.850 [2024-05-15 19:42:07.225201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.850 [2024-05-15 19:42:07.225210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:84888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.850 [2024-05-15 19:42:07.225217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.850 [2024-05-15 19:42:07.225226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.850 [2024-05-15 19:42:07.225234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.850 [2024-05-15 19:42:07.225242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:84904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.850 [2024-05-15 19:42:07.225249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.850 [2024-05-15 19:42:07.225258] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:84912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.850 [2024-05-15 19:42:07.225266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.850 [2024-05-15 19:42:07.225275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:84920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.850 [2024-05-15 19:42:07.225282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.850 [2024-05-15 19:42:07.225291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:84928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.850 [2024-05-15 19:42:07.225299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.850 [2024-05-15 19:42:07.225308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:84936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.850 [2024-05-15 19:42:07.225341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.850 [2024-05-15 19:42:07.225351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:84944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.850 [2024-05-15 19:42:07.225358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.850 [2024-05-15 19:42:07.225367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:84952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.850 [2024-05-15 19:42:07.225374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.850 [2024-05-15 19:42:07.225383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.850 [2024-05-15 19:42:07.225390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.850 [2024-05-15 19:42:07.225399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:84968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.850 [2024-05-15 19:42:07.225406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.850 [2024-05-15 19:42:07.225415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.850 [2024-05-15 19:42:07.225422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.850 [2024-05-15 19:42:07.225431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:84712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.850 [2024-05-15 19:42:07.225438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.850 [2024-05-15 19:42:07.225447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:84720 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.850 [2024-05-15 19:42:07.225454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.850 [2024-05-15 19:42:07.225463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:84728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.850 [2024-05-15 19:42:07.225470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.850 [2024-05-15 19:42:07.225479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:84736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.850 [2024-05-15 19:42:07.225487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.850 [2024-05-15 19:42:07.225496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:84744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.850 [2024-05-15 19:42:07.225504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.850 [2024-05-15 19:42:07.225513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:84752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.850 [2024-05-15 19:42:07.225520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.850 [2024-05-15 19:42:07.225529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:84984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.850 [2024-05-15 19:42:07.225541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.850 [2024-05-15 19:42:07.225550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:84992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.850 [2024-05-15 19:42:07.225557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.850 [2024-05-15 19:42:07.225566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:85000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.850 [2024-05-15 19:42:07.225573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.850 [2024-05-15 19:42:07.225582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:85008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.850 [2024-05-15 19:42:07.225589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.850 [2024-05-15 19:42:07.225598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:85016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.850 [2024-05-15 19:42:07.225605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.850 [2024-05-15 19:42:07.225614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:85024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:47.850 [2024-05-15 19:42:07.225620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.851 [2024-05-15 19:42:07.225630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:85032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.851 [2024-05-15 19:42:07.225638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.851 [2024-05-15 19:42:07.225647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:85040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.851 [2024-05-15 19:42:07.225654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.851 [2024-05-15 19:42:07.225663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:85048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.851 [2024-05-15 19:42:07.225671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.851 [2024-05-15 19:42:07.225680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:85056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.851 [2024-05-15 19:42:07.225687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.851 [2024-05-15 19:42:07.225696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:85064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.851 [2024-05-15 19:42:07.225703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.851 [2024-05-15 19:42:07.225712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:85072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.851 [2024-05-15 19:42:07.225719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.851 [2024-05-15 19:42:07.225728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:85080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.851 [2024-05-15 19:42:07.225735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.851 [2024-05-15 19:42:07.225745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:85088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.851 [2024-05-15 19:42:07.225752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.851 [2024-05-15 19:42:07.225761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:85096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.851 [2024-05-15 19:42:07.225768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.851 [2024-05-15 19:42:07.225777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:85104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.851 [2024-05-15 19:42:07.225784] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.851 [2024-05-15 19:42:07.225793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.851 [2024-05-15 19:42:07.225800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.851 [2024-05-15 19:42:07.225809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:85120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.851 [2024-05-15 19:42:07.225816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.851 [2024-05-15 19:42:07.225824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:85128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.851 [2024-05-15 19:42:07.225832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.851 [2024-05-15 19:42:07.225840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.851 [2024-05-15 19:42:07.225847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.851 [2024-05-15 19:42:07.225856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:85144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.851 [2024-05-15 19:42:07.225863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.851 [2024-05-15 19:42:07.225872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:85152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.851 [2024-05-15 19:42:07.225879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.851 [2024-05-15 19:42:07.225888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:85160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.851 [2024-05-15 19:42:07.225895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.851 [2024-05-15 19:42:07.225903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:85168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.851 [2024-05-15 19:42:07.225910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.851 [2024-05-15 19:42:07.225919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:85176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.851 [2024-05-15 19:42:07.225926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.851 [2024-05-15 19:42:07.225935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:85184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.851 [2024-05-15 19:42:07.225944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.851 [2024-05-15 19:42:07.225953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:85192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.851 [2024-05-15 19:42:07.225960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.851 [2024-05-15 19:42:07.225969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:85200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.851 [2024-05-15 19:42:07.225976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.851 [2024-05-15 19:42:07.225997] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.851 [2024-05-15 19:42:07.226004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85208 len:8 PRP1 0x0 PRP2 0x0 00:26:47.851 [2024-05-15 19:42:07.226012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.851 [2024-05-15 19:42:07.226023] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.851 [2024-05-15 19:42:07.226029] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.851 [2024-05-15 19:42:07.226035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85216 len:8 PRP1 0x0 PRP2 0x0 00:26:47.851 [2024-05-15 19:42:07.226042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.851 [2024-05-15 19:42:07.226050] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.851 [2024-05-15 19:42:07.226055] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.851 [2024-05-15 19:42:07.226061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85224 len:8 PRP1 0x0 PRP2 0x0 00:26:47.851 [2024-05-15 19:42:07.226068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.851 [2024-05-15 19:42:07.226076] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.851 [2024-05-15 19:42:07.226081] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.851 [2024-05-15 19:42:07.226087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85232 len:8 PRP1 0x0 PRP2 0x0 00:26:47.851 [2024-05-15 19:42:07.226094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.851 [2024-05-15 19:42:07.226101] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.851 [2024-05-15 19:42:07.226106] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.851 [2024-05-15 19:42:07.226112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85240 len:8 PRP1 0x0 PRP2 0x0 00:26:47.851 [2024-05-15 19:42:07.226119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:47.851 [2024-05-15 19:42:07.226127] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.851 [2024-05-15 19:42:07.226132] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.851 [2024-05-15 19:42:07.226138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85248 len:8 PRP1 0x0 PRP2 0x0 00:26:47.851 [2024-05-15 19:42:07.226145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.851 [2024-05-15 19:42:07.226152] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.851 [2024-05-15 19:42:07.226158] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.851 [2024-05-15 19:42:07.226166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85256 len:8 PRP1 0x0 PRP2 0x0 00:26:47.851 [2024-05-15 19:42:07.226174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.851 [2024-05-15 19:42:07.226181] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.851 [2024-05-15 19:42:07.226186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.851 [2024-05-15 19:42:07.226192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85264 len:8 PRP1 0x0 PRP2 0x0 00:26:47.851 [2024-05-15 19:42:07.226199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.851 [2024-05-15 19:42:07.226206] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.851 [2024-05-15 19:42:07.226212] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.851 [2024-05-15 19:42:07.226217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85272 len:8 PRP1 0x0 PRP2 0x0 00:26:47.851 [2024-05-15 19:42:07.226224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.851 [2024-05-15 19:42:07.226232] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.851 [2024-05-15 19:42:07.226237] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.851 [2024-05-15 19:42:07.226243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85280 len:8 PRP1 0x0 PRP2 0x0 00:26:47.851 [2024-05-15 19:42:07.226250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.851 [2024-05-15 19:42:07.226257] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.851 [2024-05-15 19:42:07.226263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.851 [2024-05-15 19:42:07.226269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85288 len:8 PRP1 0x0 PRP2 0x0 00:26:47.851 [2024-05-15 19:42:07.226275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.851 [2024-05-15 
19:42:07.226283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.852 [2024-05-15 19:42:07.226288] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.852 [2024-05-15 19:42:07.226294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85296 len:8 PRP1 0x0 PRP2 0x0 00:26:47.852 [2024-05-15 19:42:07.226301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.852 [2024-05-15 19:42:07.226309] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.852 [2024-05-15 19:42:07.226318] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.852 [2024-05-15 19:42:07.226324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85304 len:8 PRP1 0x0 PRP2 0x0 00:26:47.852 [2024-05-15 19:42:07.226332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.852 [2024-05-15 19:42:07.226339] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.852 [2024-05-15 19:42:07.226344] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.852 [2024-05-15 19:42:07.226350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85312 len:8 PRP1 0x0 PRP2 0x0 00:26:47.852 [2024-05-15 19:42:07.226357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.852 [2024-05-15 19:42:07.226364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.852 [2024-05-15 19:42:07.226371] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.852 [2024-05-15 19:42:07.226377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85320 len:8 PRP1 0x0 PRP2 0x0 00:26:47.852 [2024-05-15 19:42:07.226384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.852 [2024-05-15 19:42:07.226391] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.852 [2024-05-15 19:42:07.226397] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.852 [2024-05-15 19:42:07.226403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85328 len:8 PRP1 0x0 PRP2 0x0 00:26:47.852 [2024-05-15 19:42:07.226410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.852 [2024-05-15 19:42:07.226417] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.852 [2024-05-15 19:42:07.226422] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.852 [2024-05-15 19:42:07.226428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85336 len:8 PRP1 0x0 PRP2 0x0 00:26:47.852 [2024-05-15 19:42:07.226435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.852 [2024-05-15 19:42:07.226442] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.852 [2024-05-15 19:42:07.226448] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.852 [2024-05-15 19:42:07.226454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85344 len:8 PRP1 0x0 PRP2 0x0 00:26:47.852 [2024-05-15 19:42:07.226461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.852 [2024-05-15 19:42:07.226469] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.852 [2024-05-15 19:42:07.226474] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.852 [2024-05-15 19:42:07.226480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85352 len:8 PRP1 0x0 PRP2 0x0 00:26:47.852 [2024-05-15 19:42:07.226487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.852 [2024-05-15 19:42:07.226494] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.852 [2024-05-15 19:42:07.226500] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.852 [2024-05-15 19:42:07.226505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85360 len:8 PRP1 0x0 PRP2 0x0 00:26:47.852 [2024-05-15 19:42:07.226512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.852 [2024-05-15 19:42:07.226520] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.852 [2024-05-15 19:42:07.226526] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.852 [2024-05-15 19:42:07.226532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85368 len:8 PRP1 0x0 PRP2 0x0 00:26:47.852 [2024-05-15 19:42:07.226539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.852 [2024-05-15 19:42:07.226547] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.852 [2024-05-15 19:42:07.226552] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.852 [2024-05-15 19:42:07.226558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85376 len:8 PRP1 0x0 PRP2 0x0 00:26:47.852 [2024-05-15 19:42:07.226565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.852 [2024-05-15 19:42:07.226574] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.852 [2024-05-15 19:42:07.226579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.852 [2024-05-15 19:42:07.226585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85384 len:8 PRP1 0x0 PRP2 0x0 00:26:47.852 [2024-05-15 19:42:07.226592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.852 [2024-05-15 19:42:07.226600] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:26:47.852 [2024-05-15 19:42:07.226606] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.852 [2024-05-15 19:42:07.226612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85392 len:8 PRP1 0x0 PRP2 0x0 00:26:47.852 [2024-05-15 19:42:07.226619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.852 [2024-05-15 19:42:07.226626] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.852 [2024-05-15 19:42:07.226632] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.852 [2024-05-15 19:42:07.226638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85400 len:8 PRP1 0x0 PRP2 0x0 00:26:47.852 [2024-05-15 19:42:07.226645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.852 [2024-05-15 19:42:07.226653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.852 [2024-05-15 19:42:07.226662] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.852 [2024-05-15 19:42:07.226668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85408 len:8 PRP1 0x0 PRP2 0x0 00:26:47.852 [2024-05-15 19:42:07.226676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.852 [2024-05-15 19:42:07.226684] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.852 [2024-05-15 19:42:07.226689] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.852 [2024-05-15 19:42:07.226695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85416 len:8 PRP1 0x0 PRP2 0x0 00:26:47.852 [2024-05-15 19:42:07.226702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.852 [2024-05-15 19:42:07.226709] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.852 [2024-05-15 19:42:07.226714] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.852 [2024-05-15 19:42:07.226720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85424 len:8 PRP1 0x0 PRP2 0x0 00:26:47.852 [2024-05-15 19:42:07.226727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.852 [2024-05-15 19:42:07.226734] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.852 [2024-05-15 19:42:07.226740] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.852 [2024-05-15 19:42:07.226746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85432 len:8 PRP1 0x0 PRP2 0x0 00:26:47.852 [2024-05-15 19:42:07.226752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.852 [2024-05-15 19:42:07.226760] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.852 [2024-05-15 19:42:07.226765] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.852 [2024-05-15 19:42:07.226771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85440 len:8 PRP1 0x0 PRP2 0x0 00:26:47.852 [2024-05-15 19:42:07.226779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.852 [2024-05-15 19:42:07.226786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.852 [2024-05-15 19:42:07.226792] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.852 [2024-05-15 19:42:07.226798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85448 len:8 PRP1 0x0 PRP2 0x0 00:26:47.852 [2024-05-15 19:42:07.226804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.852 [2024-05-15 19:42:07.226812] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.852 [2024-05-15 19:42:07.226818] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.853 [2024-05-15 19:42:07.226823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85456 len:8 PRP1 0x0 PRP2 0x0 00:26:47.853 [2024-05-15 19:42:07.226830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.853 [2024-05-15 19:42:07.226837] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.853 [2024-05-15 19:42:07.226843] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.853 [2024-05-15 19:42:07.226848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85464 len:8 PRP1 0x0 PRP2 0x0 00:26:47.853 [2024-05-15 19:42:07.226855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.853 [2024-05-15 19:42:07.226863] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.853 [2024-05-15 19:42:07.226868] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.853 [2024-05-15 19:42:07.226875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85472 len:8 PRP1 0x0 PRP2 0x0 00:26:47.853 [2024-05-15 19:42:07.226881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.853 [2024-05-15 19:42:07.226889] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.853 [2024-05-15 19:42:07.226894] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.853 [2024-05-15 19:42:07.226900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85480 len:8 PRP1 0x0 PRP2 0x0 00:26:47.853 [2024-05-15 19:42:07.226907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.853 [2024-05-15 19:42:07.226914] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.853 [2024-05-15 19:42:07.226919] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:26:47.853 [2024-05-15 19:42:07.226925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85488 len:8 PRP1 0x0 PRP2 0x0 00:26:47.853 [2024-05-15 19:42:07.226932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.853 [2024-05-15 19:42:07.226939] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.853 [2024-05-15 19:42:07.226945] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.853 [2024-05-15 19:42:07.226951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85496 len:8 PRP1 0x0 PRP2 0x0 00:26:47.853 [2024-05-15 19:42:07.226957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.853 [2024-05-15 19:42:07.226964] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.853 [2024-05-15 19:42:07.226971] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.853 [2024-05-15 19:42:07.226977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85504 len:8 PRP1 0x0 PRP2 0x0 00:26:47.853 [2024-05-15 19:42:07.226984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.853 [2024-05-15 19:42:07.226991] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.853 [2024-05-15 19:42:07.226997] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.853 [2024-05-15 19:42:07.227003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85512 len:8 PRP1 0x0 PRP2 0x0 00:26:47.853 [2024-05-15 19:42:07.227009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.853 [2024-05-15 19:42:07.227017] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.853 [2024-05-15 19:42:07.227022] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.853 [2024-05-15 19:42:07.227028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85520 len:8 PRP1 0x0 PRP2 0x0 00:26:47.853 [2024-05-15 19:42:07.236668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.853 [2024-05-15 19:42:07.236701] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.853 [2024-05-15 19:42:07.236708] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.853 [2024-05-15 19:42:07.236716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85528 len:8 PRP1 0x0 PRP2 0x0 00:26:47.853 [2024-05-15 19:42:07.236723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.853 [2024-05-15 19:42:07.236731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.853 [2024-05-15 19:42:07.236738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.853 
[2024-05-15 19:42:07.236744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85536 len:8 PRP1 0x0 PRP2 0x0 00:26:47.853 [2024-05-15 19:42:07.236751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.853 [2024-05-15 19:42:07.236758] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.853 [2024-05-15 19:42:07.236764] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.853 [2024-05-15 19:42:07.236770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85544 len:8 PRP1 0x0 PRP2 0x0 00:26:47.853 [2024-05-15 19:42:07.236777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.853 [2024-05-15 19:42:07.236784] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.853 [2024-05-15 19:42:07.236789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.853 [2024-05-15 19:42:07.236795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85552 len:8 PRP1 0x0 PRP2 0x0 00:26:47.853 [2024-05-15 19:42:07.236802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.853 [2024-05-15 19:42:07.236810] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.853 [2024-05-15 19:42:07.236815] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.853 [2024-05-15 19:42:07.236821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85560 len:8 PRP1 0x0 PRP2 0x0 00:26:47.853 [2024-05-15 19:42:07.236828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.853 [2024-05-15 19:42:07.236840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.853 [2024-05-15 19:42:07.236846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.853 [2024-05-15 19:42:07.236851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85568 len:8 PRP1 0x0 PRP2 0x0 00:26:47.853 [2024-05-15 19:42:07.236858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.853 [2024-05-15 19:42:07.236866] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.853 [2024-05-15 19:42:07.236871] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.853 [2024-05-15 19:42:07.236877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85576 len:8 PRP1 0x0 PRP2 0x0 00:26:47.853 [2024-05-15 19:42:07.236884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.853 [2024-05-15 19:42:07.236891] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.853 [2024-05-15 19:42:07.236896] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.853 [2024-05-15 19:42:07.236902] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85584 len:8 PRP1 0x0 PRP2 0x0 00:26:47.853 [2024-05-15 19:42:07.236909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.853 [2024-05-15 19:42:07.236916] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.853 [2024-05-15 19:42:07.236922] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.853 [2024-05-15 19:42:07.236928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85592 len:8 PRP1 0x0 PRP2 0x0 00:26:47.853 [2024-05-15 19:42:07.236934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.853 [2024-05-15 19:42:07.236943] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.853 [2024-05-15 19:42:07.236948] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.853 [2024-05-15 19:42:07.236954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85600 len:8 PRP1 0x0 PRP2 0x0 00:26:47.853 [2024-05-15 19:42:07.236961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.853 [2024-05-15 19:42:07.236969] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.853 [2024-05-15 19:42:07.236974] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.853 [2024-05-15 19:42:07.236980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85608 len:8 PRP1 0x0 PRP2 0x0 00:26:47.853 [2024-05-15 19:42:07.236987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.853 [2024-05-15 19:42:07.236994] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.853 [2024-05-15 19:42:07.237000] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.853 [2024-05-15 19:42:07.237006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85616 len:8 PRP1 0x0 PRP2 0x0 00:26:47.853 [2024-05-15 19:42:07.237013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.853 [2024-05-15 19:42:07.237020] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.853 [2024-05-15 19:42:07.237026] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.853 [2024-05-15 19:42:07.237031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85624 len:8 PRP1 0x0 PRP2 0x0 00:26:47.853 [2024-05-15 19:42:07.237040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.853 [2024-05-15 19:42:07.237048] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.853 [2024-05-15 19:42:07.237053] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.853 [2024-05-15 19:42:07.237059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:85632 len:8 PRP1 0x0 PRP2 0x0 00:26:47.853 [2024-05-15 19:42:07.237065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.853 [2024-05-15 19:42:07.237073] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.853 [2024-05-15 19:42:07.237079] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.853 [2024-05-15 19:42:07.237085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85640 len:8 PRP1 0x0 PRP2 0x0 00:26:47.853 [2024-05-15 19:42:07.237091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.853 [2024-05-15 19:42:07.237099] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.853 [2024-05-15 19:42:07.237105] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.854 [2024-05-15 19:42:07.237111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85648 len:8 PRP1 0x0 PRP2 0x0 00:26:47.854 [2024-05-15 19:42:07.237117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.854 [2024-05-15 19:42:07.237125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.854 [2024-05-15 19:42:07.237130] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.854 [2024-05-15 19:42:07.237136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85656 len:8 PRP1 0x0 PRP2 0x0 00:26:47.854 [2024-05-15 19:42:07.237143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.854 [2024-05-15 19:42:07.237151] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.854 [2024-05-15 19:42:07.237156] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.854 [2024-05-15 19:42:07.237162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85664 len:8 PRP1 0x0 PRP2 0x0 00:26:47.854 [2024-05-15 19:42:07.237168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.854 [2024-05-15 19:42:07.237176] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.854 [2024-05-15 19:42:07.237181] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.854 [2024-05-15 19:42:07.237187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85672 len:8 PRP1 0x0 PRP2 0x0 00:26:47.854 [2024-05-15 19:42:07.237194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.854 [2024-05-15 19:42:07.237201] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.854 [2024-05-15 19:42:07.237206] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.854 [2024-05-15 19:42:07.237212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85680 len:8 PRP1 0x0 PRP2 0x0 
00:26:47.854 [2024-05-15 19:42:07.237219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.854 [2024-05-15 19:42:07.237227] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.854 [2024-05-15 19:42:07.237232] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.854 [2024-05-15 19:42:07.237239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85688 len:8 PRP1 0x0 PRP2 0x0 00:26:47.854 [2024-05-15 19:42:07.237246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.854 [2024-05-15 19:42:07.237253] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.854 [2024-05-15 19:42:07.237259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.854 [2024-05-15 19:42:07.237264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85696 len:8 PRP1 0x0 PRP2 0x0 00:26:47.854 [2024-05-15 19:42:07.237272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.854 [2024-05-15 19:42:07.237280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.854 [2024-05-15 19:42:07.237285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.854 [2024-05-15 19:42:07.237291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85704 len:8 PRP1 0x0 PRP2 0x0 00:26:47.854 [2024-05-15 19:42:07.237298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.854 [2024-05-15 19:42:07.237305] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.854 [2024-05-15 19:42:07.237311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.854 [2024-05-15 19:42:07.237324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85712 len:8 PRP1 0x0 PRP2 0x0 00:26:47.854 [2024-05-15 19:42:07.237331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.854 [2024-05-15 19:42:07.237338] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.854 [2024-05-15 19:42:07.237344] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.854 [2024-05-15 19:42:07.237350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85720 len:8 PRP1 0x0 PRP2 0x0 00:26:47.854 [2024-05-15 19:42:07.237357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.854 [2024-05-15 19:42:07.237395] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23bcc30 was disconnected and freed. reset controller. 
00:26:47.854 [2024-05-15 19:42:07.237406] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:26:47.854 [2024-05-15 19:42:07.237433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.854 [2024-05-15 19:42:07.237442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.854 [2024-05-15 19:42:07.237452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.854 [2024-05-15 19:42:07.237460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.854 [2024-05-15 19:42:07.237470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.854 [2024-05-15 19:42:07.237478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.854 [2024-05-15 19:42:07.237488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.854 [2024-05-15 19:42:07.237497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.854 [2024-05-15 19:42:07.237507] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:47.854 [2024-05-15 19:42:07.237548] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2399cb0 (9): Bad file descriptor 00:26:47.854 [2024-05-15 19:42:07.241129] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:47.854 [2024-05-15 19:42:07.320598] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
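The block above is the tail end of the last failover pass in this run: the qpair to 10.0.0.2:4422 is disconnected and freed, the queued I/O is completed with ABORTED - SQ DELETION status, the path is flipped back to 10.0.0.2:4420, and the controller reset completes successfully. When digging through a capture of output like this, one quick way to pull out just the path transitions and reset outcomes is a grep along these lines (an illustration, not part of the test script; try.txt stands for wherever the bdevperf output was saved):

    # pick out only the failover transitions and reset results from a saved log
    grep -E 'Start failover from|Resetting controller successful' try.txt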
00:26:47.854
00:26:47.854 Latency(us)
00:26:47.854 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:47.854 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:47.854 Verification LBA range: start 0x0 length 0x4000
00:26:47.854 NVMe0n1 : 15.01 9043.94 35.33 761.82 0.00 13024.83 791.89 29928.11
00:26:47.854 ===================================================================================================================
00:26:47.854 Total : 9043.94 35.33 761.82 0.00 13024.83 791.89 29928.11
00:26:47.854 Received shutdown signal, test time was about 15.000000 seconds
00:26:47.854
00:26:47.854 Latency(us)
00:26:47.854 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:47.854 ===================================================================================================================
00:26:47.854 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:47.854 19:42:13 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:26:47.854 19:42:13 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:26:47.854 19:42:13 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:26:47.854 19:42:13 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3725749
00:26:47.854 19:42:13 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3725749 /var/tmp/bdevperf.sock
00:26:47.854 19:42:13 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:26:47.854 19:42:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 3725749 ']'
00:26:47.854 19:42:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:26:47.854 19:42:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100
00:26:47.854 19:42:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:26:47.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
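The @65 and @67 trace records above are the pass criterion for this phase: the script counts 'Resetting controller successful' notices in the captured bdevperf output and requires exactly three of them. A minimal sketch of that check, assuming the capture lives in a file named try.txt as it does later in this run:

    # fail the phase unless all three expected controller resets were logged
    count=$(grep -c 'Resetting controller successful' try.txt)
    if (( count != 3 )); then
        echo "expected 3 successful resets, got $count"
        exit 1
    fi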
00:26:47.854 19:42:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:47.854 19:42:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:48.159 19:42:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:48.159 19:42:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:26:48.159 19:42:14 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:48.421 [2024-05-15 19:42:14.343050] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:48.421 19:42:14 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:48.421 [2024-05-15 19:42:14.563625] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:48.421 19:42:14 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:48.683 NVMe0n1 00:26:48.944 19:42:14 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:48.944 00:26:48.944 19:42:15 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:49.515 00:26:49.515 19:42:15 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:49.515 19:42:15 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:26:49.515 19:42:15 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:49.777 19:42:15 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:26:53.079 19:42:18 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:53.079 19:42:18 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:26:53.079 19:42:18 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:53.079 19:42:18 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3726776 00:26:53.079 19:42:18 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 3726776 00:26:54.021 0 00:26:54.021 19:42:20 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:54.021 [2024-05-15 19:42:13.326543] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
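For readability, the listener and path setup replayed in the @76 through @80 records above amounts to the sequence below. The RPC names, addresses, ports, and subsystem NQN are taken verbatim from the trace; only the long workspace paths are shortened:

    # advertise the subsystem on two extra ports, then give bdevperf's NVMe0
    # controller one path per port so bdev_nvme can fail over between them
    rpc=scripts/rpc.py
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    for port in 4420 4421 4422; do
        $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
            -t tcp -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done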
00:26:54.021 [2024-05-15 19:42:13.326603] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3725749 ] 00:26:54.021 EAL: No free 2048 kB hugepages reported on node 1 00:26:54.021 [2024-05-15 19:42:13.408499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.021 [2024-05-15 19:42:13.471051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:54.021 [2024-05-15 19:42:15.761352] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:26:54.021 [2024-05-15 19:42:15.761395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:54.021 [2024-05-15 19:42:15.761406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.021 [2024-05-15 19:42:15.761416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:54.021 [2024-05-15 19:42:15.761423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.021 [2024-05-15 19:42:15.761431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:54.021 [2024-05-15 19:42:15.761438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.021 [2024-05-15 19:42:15.761446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:54.021 [2024-05-15 19:42:15.761453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.021 [2024-05-15 19:42:15.761460] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.021 [2024-05-15 19:42:15.761482] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.021 [2024-05-15 19:42:15.761495] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x595cb0 (9): Bad file descriptor 00:26:54.021 [2024-05-15 19:42:15.769333] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:54.021 Running I/O for 1 seconds... 
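The try.txt dump above (the EAL parameters, the failover from 10.0.0.2:4420 to 10.0.0.2:4421, and the successful reset) is the output of the second bdevperf pass, the short -t 1 run started at the @72/@73/@75 records earlier. Roughly, that pass looks like the following sketch, with the output assumed to be redirected into try.txt and with waitforlisten being the SPDK test helper seen in the trace:

    # start bdevperf in RPC-driven mode, wait for its socket, run the workload,
    # then read back the captured log (paths abbreviated from the trace)
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &> try.txt &
    bdevperf_pid=$!
    waitforlisten $bdevperf_pid /var/tmp/bdevperf.sock
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
    cat try.txt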
00:26:54.021 00:26:54.021 Latency(us) 00:26:54.021 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:54.021 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:54.021 Verification LBA range: start 0x0 length 0x4000 00:26:54.022 NVMe0n1 : 1.01 8781.28 34.30 0.00 0.00 14509.89 2990.08 14964.05 00:26:54.022 =================================================================================================================== 00:26:54.022 Total : 8781.28 34.30 0.00 0.00 14509.89 2990.08 14964.05 00:26:54.022 19:42:20 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:54.022 19:42:20 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:26:54.282 19:42:20 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:54.282 19:42:20 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:26:54.282 19:42:20 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:54.543 19:42:20 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:54.805 19:42:20 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:26:58.110 19:42:23 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:58.110 19:42:23 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:26:58.110 19:42:24 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 3725749 00:26:58.110 19:42:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 3725749 ']' 00:26:58.110 19:42:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 3725749 00:26:58.110 19:42:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:26:58.110 19:42:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:58.110 19:42:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3725749 00:26:58.110 19:42:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:58.110 19:42:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:58.110 19:42:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3725749' 00:26:58.110 killing process with pid 3725749 00:26:58.110 19:42:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 3725749 00:26:58.110 19:42:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 3725749 00:26:58.110 19:42:24 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:26:58.110 19:42:24 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:58.371 19:42:24 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:26:58.371 
19:42:24 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:58.371 19:42:24 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:26:58.371 19:42:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:58.371 19:42:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:26:58.371 19:42:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:58.371 19:42:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:26:58.371 19:42:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:58.371 19:42:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:58.371 rmmod nvme_tcp 00:26:58.371 rmmod nvme_fabrics 00:26:58.371 rmmod nvme_keyring 00:26:58.371 19:42:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:58.371 19:42:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:26:58.371 19:42:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:26:58.372 19:42:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3721476 ']' 00:26:58.372 19:42:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3721476 00:26:58.372 19:42:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 3721476 ']' 00:26:58.372 19:42:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 3721476 00:26:58.372 19:42:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:26:58.372 19:42:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:58.372 19:42:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3721476 00:26:58.372 19:42:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:26:58.372 19:42:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:26:58.372 19:42:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3721476' 00:26:58.372 killing process with pid 3721476 00:26:58.372 19:42:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 3721476 00:26:58.372 [2024-05-15 19:42:24.543667] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:58.372 19:42:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 3721476 00:26:58.633 19:42:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:58.633 19:42:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:58.633 19:42:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:58.633 19:42:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:58.633 19:42:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:58.633 19:42:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:58.633 19:42:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:58.633 19:42:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:01.180 19:42:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:01.180 00:27:01.180 real 0m41.029s 00:27:01.180 user 
2m4.118s 00:27:01.180 sys 0m9.282s 00:27:01.180 19:42:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:01.180 19:42:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:01.180 ************************************ 00:27:01.180 END TEST nvmf_failover 00:27:01.180 ************************************ 00:27:01.180 19:42:26 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:27:01.180 19:42:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:01.180 19:42:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:01.180 19:42:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:01.180 ************************************ 00:27:01.180 START TEST nvmf_host_discovery 00:27:01.180 ************************************ 00:27:01.180 19:42:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:27:01.180 * Looking for test storage... 00:27:01.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:01.180 19:42:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:01.180 19:42:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:27:01.180 19:42:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:01.180 19:42:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:01.180 19:42:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:01.180 19:42:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:01.180 19:42:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:01.180 19:42:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:01.180 19:42:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:01.180 19:42:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:01.180 19:42:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:01.180 19:42:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:01.180 19:42:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:01.180 19:42:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:01.180 19:42:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:01.180 19:42:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:01.180 19:42:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:01.180 19:42:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:01.180 19:42:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:01.180 19:42:26 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:01.180 19:42:26 nvmf_tcp.nvmf_host_discovery -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:01.181 19:42:26 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:01.181 19:42:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.181 19:42:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.181 19:42:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.181 19:42:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:27:01.181 19:42:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.181 19:42:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:27:01.181 19:42:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:01.181 19:42:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:01.181 19:42:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:01.181 19:42:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:01.181 19:42:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:01.181 19:42:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:01.181 19:42:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:01.181 19:42:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 
-- # have_pci_nics=0 00:27:01.181 19:42:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:27:01.181 19:42:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:27:01.181 19:42:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:27:01.181 19:42:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:27:01.181 19:42:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:27:01.181 19:42:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:27:01.181 19:42:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:27:01.181 19:42:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:01.181 19:42:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:01.181 19:42:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:01.181 19:42:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:01.181 19:42:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:01.181 19:42:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:01.181 19:42:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:01.181 19:42:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:01.181 19:42:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:01.181 19:42:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:01.181 19:42:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:27:01.181 19:42:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
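By this point discovery.sh has its parameters in place: the discovery service will listen on port 8009, subsystems are named under nqn.2016-06.io.spdk:cnode, the host identifies as nqn.2021-12.io.spdk:test, and the helper host process is driven over /tmp/host.sock; nvmftestinit is now probing for usable NICs. Once the target is up on 10.0.0.2, the discovery log page advertised on that port could be inspected with a stock nvme-cli call along these lines (an illustration, not a command issued in this trace):

    # ask the discovery service which subsystems and listeners it advertises
    nvme discover -t tcp -a 10.0.0.2 -s 8009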
00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:09.324 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:09.324 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == 
e810 ]] 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:09.324 Found net devices under 0000:31:00.0: cvl_0_0 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:09.324 Found net devices under 0000:31:00.1: cvl_0_1 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:09.324 19:42:34 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:09.324 19:42:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:09.324 19:42:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:09.324 19:42:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:09.324 19:42:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:09.324 19:42:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:09.324 19:42:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:09.324 19:42:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:09.324 19:42:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:09.324 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:09.324 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.467 ms 00:27:09.324 00:27:09.324 --- 10.0.0.2 ping statistics --- 00:27:09.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:09.324 rtt min/avg/max/mdev = 0.467/0.467/0.467/0.000 ms 00:27:09.324 19:42:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:09.324 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:09.324 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.369 ms 00:27:09.324 00:27:09.324 --- 10.0.0.1 ping statistics --- 00:27:09.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:09.324 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:27:09.324 19:42:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:09.324 19:42:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:27:09.324 19:42:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:09.324 19:42:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:09.324 19:42:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:09.324 19:42:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:09.324 19:42:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:09.325 19:42:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:09.325 19:42:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:09.325 19:42:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:27:09.325 19:42:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:09.325 19:42:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:09.325 19:42:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:09.325 19:42:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=3732491 00:27:09.325 19:42:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 3732491 00:27:09.325 19:42:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:09.325 19:42:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 3732491 ']' 00:27:09.325 19:42:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:09.325 19:42:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:09.325 19:42:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:09.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:09.325 19:42:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:09.325 19:42:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:09.325 [2024-05-15 19:42:35.322895] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:27:09.325 [2024-05-15 19:42:35.322958] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:09.325 EAL: No free 2048 kB hugepages reported on node 1 00:27:09.325 [2024-05-15 19:42:35.404567] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.325 [2024-05-15 19:42:35.478098] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:27:09.325 [2024-05-15 19:42:35.478141] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:09.325 [2024-05-15 19:42:35.478149] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:09.325 [2024-05-15 19:42:35.478156] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:09.325 [2024-05-15 19:42:35.478161] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:09.325 [2024-05-15 19:42:35.478181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:10.266 19:42:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:10.266 19:42:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:27:10.266 19:42:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:10.266 19:42:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:10.266 19:42:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:10.266 19:42:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:10.266 19:42:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:10.266 19:42:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.266 19:42:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:10.267 [2024-05-15 19:42:36.197228] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:10.267 19:42:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.267 19:42:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:27:10.267 19:42:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.267 19:42:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:10.267 [2024-05-15 19:42:36.209201] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:10.267 [2024-05-15 19:42:36.209410] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:10.267 19:42:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.267 19:42:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:27:10.267 19:42:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.267 19:42:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:10.267 null0 00:27:10.267 19:42:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.267 19:42:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:27:10.267 19:42:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.267 19:42:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:10.267 null1 00:27:10.267 19:42:36 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.267 19:42:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:27:10.267 19:42:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.267 19:42:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:10.267 19:42:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.267 19:42:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3732812 00:27:10.267 19:42:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3732812 /tmp/host.sock 00:27:10.267 19:42:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:27:10.267 19:42:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 3732812 ']' 00:27:10.267 19:42:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:27:10.267 19:42:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:10.267 19:42:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:10.267 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:10.267 19:42:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:10.267 19:42:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:10.267 [2024-05-15 19:42:36.296090] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:27:10.267 [2024-05-15 19:42:36.296143] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3732812 ] 00:27:10.267 EAL: No free 2048 kB hugepages reported on node 1 00:27:10.267 [2024-05-15 19:42:36.378424] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:10.267 [2024-05-15 19:42:36.442768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:11.209 19:42:37 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s 
/tmp/host.sock bdev_get_bdevs 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:11.209 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:11.470 [2024-05-15 19:42:37.512741] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:27:11.470 
19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:11.470 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.730 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:11.730 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:27:11.730 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:27:11.730 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:27:11.730 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:27:11.730 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.730 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:11.730 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.730 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:11.730 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:11.730 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:27:11.730 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:27:11.730 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:11.730 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:27:11.730 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:11.730 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:11.730 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.730 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:11.730 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:11.730 19:42:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:11.730 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.730 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:27:11.730 19:42:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:27:12.300 [2024-05-15 19:42:38.227509] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:12.300 [2024-05-15 19:42:38.227536] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:12.300 [2024-05-15 19:42:38.227551] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:12.300 [2024-05-15 19:42:38.356947] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:12.561 [2024-05-15 19:42:38.581935] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:27:12.561 [2024-05-15 19:42:38.581956] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:12.561 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:27:12.561 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:12.561 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:27:12.561 19:42:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:12.561 19:42:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:12.561 19:42:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:12.561 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.822 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.822 19:42:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:12.822 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.822 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.822 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:27:12.822 19:42:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:27:12.822 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:27:12.822 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:27:12.822 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:27:12.822 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:27:12.822 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:27:12.822 19:42:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:12.822 19:42:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:12.822 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.822 19:42:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:12.822 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.822 19:42:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:12.822 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.822 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:27:12.823 19:42:38 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:12.823 19:42:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.084 19:42:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:13.084 19:42:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:27:13.084 19:42:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:27:13.084 19:42:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:27:13.084 19:42:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:13.084 19:42:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:13.084 19:42:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:27:13.084 19:42:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:27:13.084 19:42:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 
'expected_count))' 00:27:13.084 19:42:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:27:13.084 19:42:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:27:13.084 19:42:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:27:13.084 19:42:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.084 19:42:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:13.084 19:42:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.084 19:42:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:27:13.084 19:42:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:13.084 19:42:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:27:13.084 19:42:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:27:13.084 19:42:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:27:13.084 19:42:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.084 19:42:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:13.084 [2024-05-15 19:42:39.077039] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:13.084 [2024-05-15 19:42:39.078113] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:13.084 [2024-05-15 19:42:39.078139] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:13.084 19:42:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.084 19:42:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:13.084 19:42:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:13.084 19:42:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:27:13.084 19:42:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:27:13.084 19:42:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:13.084 19:42:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:27:13.084 19:42:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:13.084 19:42:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.084 19:42:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:13.084 19:42:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:13.084 19:42:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:13.084 19:42:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:13.084 19:42:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.084 19:42:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.084 19:42:39 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:27:13.084 19:42:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:13.084 19:42:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:13.084 19:42:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:27:13.084 19:42:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:27:13.084 19:42:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:13.084 19:42:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:27:13.084 19:42:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:13.084 19:42:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:13.085 19:42:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.085 19:42:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:13.085 19:42:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:13.085 19:42:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:13.085 19:42:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.085 19:42:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:13.085 19:42:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:27:13.085 19:42:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:27:13.085 19:42:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:27:13.085 19:42:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:27:13.085 19:42:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:27:13.085 19:42:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:27:13.085 19:42:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:27:13.085 19:42:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:13.085 19:42:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:13.085 19:42:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.085 19:42:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:13.085 19:42:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:13.085 19:42:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:13.085 [2024-05-15 19:42:39.208576] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:27:13.085 19:42:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.085 19:42:39 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:27:13.085 19:42:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:27:13.345 [2024-05-15 19:42:39.310369] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:13.345 [2024-05-15 19:42:39.310387] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:13.345 [2024-05-15 19:42:39.310392] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:14.291 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:27:14.291 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:27:14.291 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:27:14.291 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:14.291 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:14.291 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.291 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:14.291 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:14.291 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:14.291 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.291 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:27:14.291 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:27:14.291 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:27:14.291 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:14.291 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:14.291 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:14.291 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:27:14.291 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:27:14.291 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:14.291 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:27:14.291 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:14.291 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:14.291 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.291 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:14.291 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.291 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:14.291 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:14.291 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:27:14.291 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:27:14.291 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:14.291 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.291 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:14.291 [2024-05-15 19:42:40.361348] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:14.291 [2024-05-15 19:42:40.361378] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:14.291 [2024-05-15 19:42:40.363494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:14.291 [2024-05-15 19:42:40.363514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.291 [2024-05-15 19:42:40.363524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:14.291 [2024-05-15 19:42:40.363532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.291 [2024-05-15 19:42:40.363541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:14.291 [2024-05-15 19:42:40.363548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.291 [2024-05-15 19:42:40.363556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:14.291 [2024-05-15 19:42:40.363563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.291 [2024-05-15 19:42:40.363570] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x57d360 is same with the state(5) to be set 00:27:14.291 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.291 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:14.291 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:14.291 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:27:14.291 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:27:14.291 19:42:40 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:14.291 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:27:14.291 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:14.291 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:14.291 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.291 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:14.291 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:14.291 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:14.291 [2024-05-15 19:42:40.373508] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x57d360 (9): Bad file descriptor 00:27:14.291 [2024-05-15 19:42:40.383550] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:14.291 [2024-05-15 19:42:40.383965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.291 [2024-05-15 19:42:40.384360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.291 [2024-05-15 19:42:40.384382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x57d360 with addr=10.0.0.2, port=4420 00:27:14.291 [2024-05-15 19:42:40.384391] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x57d360 is same with the state(5) to be set 00:27:14.291 [2024-05-15 19:42:40.384407] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x57d360 (9): Bad file descriptor 00:27:14.291 [2024-05-15 19:42:40.384436] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:14.291 [2024-05-15 19:42:40.384443] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:14.291 [2024-05-15 19:42:40.384452] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:14.292 [2024-05-15 19:42:40.384465] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
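The connect() errno = 111 / "Resetting controller failed." loop above is the direct fallout of the nvmf_subsystem_remove_listener call on port 4420: the target no longer accepts connections there, so bdev_nvme keeps failing to re-establish that path while the 4421 path added earlier is expected to stay up. The state checks interleaved with these reconnect attempts reduce to two host-socket RPCs that already appear in this trace. A minimal sketch of running them by hand, assuming the SPDK scripts/rpc.py helper (the trace goes through the rpc_cmd wrapper around it) and the same /tmp/host.sock address used here:

  # list attached controllers on the host app (mirrors get_subsystem_names) - expect nvme0
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
  # list the trsvcids of the remaining paths for nvme0 (mirrors get_subsystem_paths);
  # after the 4420 listener removal, 4420 should eventually drop out of this list
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs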
00:27:14.292 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.292 [2024-05-15 19:42:40.393608] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:14.292 [2024-05-15 19:42:40.393969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-05-15 19:42:40.394343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-05-15 19:42:40.394363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x57d360 with addr=10.0.0.2, port=4420 00:27:14.292 [2024-05-15 19:42:40.394371] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x57d360 is same with the state(5) to be set 00:27:14.292 [2024-05-15 19:42:40.394385] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x57d360 (9): Bad file descriptor 00:27:14.292 [2024-05-15 19:42:40.394396] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:14.292 [2024-05-15 19:42:40.394402] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:14.292 [2024-05-15 19:42:40.394409] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:14.292 [2024-05-15 19:42:40.394421] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:14.292 [2024-05-15 19:42:40.403658] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:14.292 [2024-05-15 19:42:40.404018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-05-15 19:42:40.404397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-05-15 19:42:40.404407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x57d360 with addr=10.0.0.2, port=4420 00:27:14.292 [2024-05-15 19:42:40.404414] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x57d360 is same with the state(5) to be set 00:27:14.292 [2024-05-15 19:42:40.404425] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x57d360 (9): Bad file descriptor 00:27:14.292 [2024-05-15 19:42:40.404435] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:14.292 [2024-05-15 19:42:40.404441] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:14.292 [2024-05-15 19:42:40.404448] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:14.292 [2024-05-15 19:42:40.404458] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
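The retry records above are interleaved with the polling scaffolding the script leans on: the waits in this section are the waitforcondition helper from autotest_common.sh, visible in the trace as the local cond=... / local max=10 / (( max-- )) / eval / sleep 1 steps. A hedged reconstruction of that helper, pieced together from those traced commands rather than copied from the source (the behaviour on timeout is assumed):

  waitforcondition() {
      local cond=$1              # e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
      local max=10               # roughly ten attempts, one second apart
      while (( max-- )); do
          eval "$cond" && return 0   # condition met: stop waiting
          sleep 1
      done
      return 1                   # assumed: give up once the attempts are exhausted
  }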
00:27:14.292 [2024-05-15 19:42:40.413708] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:14.292 [2024-05-15 19:42:40.414118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-05-15 19:42:40.414494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-05-15 19:42:40.414504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x57d360 with addr=10.0.0.2, port=4420 00:27:14.292 [2024-05-15 19:42:40.414511] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x57d360 is same with the state(5) to be set 00:27:14.292 [2024-05-15 19:42:40.414523] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x57d360 (9): Bad file descriptor 00:27:14.292 [2024-05-15 19:42:40.414539] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:14.292 [2024-05-15 19:42:40.414549] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:14.292 [2024-05-15 19:42:40.414556] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:14.292 [2024-05-15 19:42:40.414567] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:14.292 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.292 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:27:14.292 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:14.292 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:14.292 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:27:14.292 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:27:14.292 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:14.292 [2024-05-15 19:42:40.423765] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:14.292 [2024-05-15 19:42:40.424063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:27:14.292 [2024-05-15 19:42:40.424430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-05-15 19:42:40.424440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x57d360 with addr=10.0.0.2, port=4420 00:27:14.292 [2024-05-15 19:42:40.424447] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x57d360 is same with the state(5) to be set 00:27:14.292 [2024-05-15 19:42:40.424458] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x57d360 (9): Bad file descriptor 00:27:14.292 [2024-05-15 19:42:40.424468] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:14.292 [2024-05-15 19:42:40.424474] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] 
controller reinitialization failed 00:27:14.292 [2024-05-15 19:42:40.424480] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:14.292 [2024-05-15 19:42:40.424491] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:14.292 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:14.292 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:14.292 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.292 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:14.292 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:14.292 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:14.292 [2024-05-15 19:42:40.433928] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:14.292 [2024-05-15 19:42:40.434346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-05-15 19:42:40.434736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-05-15 19:42:40.434746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x57d360 with addr=10.0.0.2, port=4420 00:27:14.292 [2024-05-15 19:42:40.434754] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x57d360 is same with the state(5) to be set 00:27:14.292 [2024-05-15 19:42:40.434765] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x57d360 (9): Bad file descriptor 00:27:14.292 [2024-05-15 19:42:40.434775] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:14.292 [2024-05-15 19:42:40.434785] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:14.292 [2024-05-15 19:42:40.434792] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:14.292 [2024-05-15 19:42:40.434802] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
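The condition strings being polled rely on two small query helpers whose RPC plumbing is visible in the trace (host/discovery.sh@59 and @55): each asks the host-side SPDK app over /tmp/host.sock, extracts the name fields with jq, and normalizes the result into one sorted, space-separated string via sort and xargs. A hedged reconstruction of that shape, with rpc_cmd standing in for the test suite's wrapper around scripts/rpc.py:

    # Reconstructed from the xtrace above; rpc_cmd is assumed to wrap scripts/rpc.py -s <sock>.
    get_subsystem_names() {
        # Controller names attached on the host, e.g. "nvme0"
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    get_bdev_list() {
        # Namespace bdevs exposed by those controllers, e.g. "nvme0n1 nvme0n2"
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }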
00:27:14.292 [2024-05-15 19:42:40.443983] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:14.292 [2024-05-15 19:42:40.444332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-05-15 19:42:40.444593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.292 [2024-05-15 19:42:40.444613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x57d360 with addr=10.0.0.2, port=4420 00:27:14.292 [2024-05-15 19:42:40.444620] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x57d360 is same with the state(5) to be set 00:27:14.292 [2024-05-15 19:42:40.444631] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x57d360 (9): Bad file descriptor 00:27:14.292 [2024-05-15 19:42:40.444641] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:14.292 [2024-05-15 19:42:40.444647] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:14.292 [2024-05-15 19:42:40.444654] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:14.292 [2024-05-15 19:42:40.444664] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:14.292 [2024-05-15 19:42:40.449979] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:27:14.292 [2024-05-15 19:42:40.449998] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:14.292 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@63 -- # xargs 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:14.553 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.815 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:27:14.815 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:27:14.815 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:27:14.815 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:27:14.815 19:42:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:14.815 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.815 19:42:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:15.756 [2024-05-15 19:42:41.765371] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:15.756 [2024-05-15 19:42:41.765389] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:15.756 [2024-05-15 19:42:41.765401] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:15.756 [2024-05-15 19:42:41.853686] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:27:16.018 [2024-05-15 19:42:41.960993] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:16.018 [2024-05-15 19:42:41.961023] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:16.018 19:42:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.018 19:42:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:16.018 19:42:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:27:16.018 19:42:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:16.018 19:42:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:16.018 19:42:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:16.018 19:42:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:16.018 19:42:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:16.018 19:42:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:16.018 19:42:41 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.018 19:42:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:16.018 request: 00:27:16.018 { 00:27:16.018 "name": "nvme", 00:27:16.018 "trtype": "tcp", 00:27:16.018 "traddr": "10.0.0.2", 00:27:16.018 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:16.018 "adrfam": "ipv4", 00:27:16.018 "trsvcid": "8009", 00:27:16.018 "wait_for_attach": true, 00:27:16.018 "method": "bdev_nvme_start_discovery", 00:27:16.018 "req_id": 1 00:27:16.018 } 00:27:16.018 Got JSON-RPC error response 00:27:16.018 response: 00:27:16.018 { 00:27:16.018 "code": -17, 00:27:16.018 "message": "File exists" 00:27:16.018 } 00:27:16.018 19:42:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:16.018 19:42:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:27:16.018 19:42:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:16.018 19:42:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:16.018 19:42:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:16.018 19:42:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:27:16.018 19:42:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:16.018 19:42:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:16.018 19:42:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.018 19:42:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:16.018 19:42:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:16.018 19:42:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:16.018 19:42:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.018 19:42:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:27:16.018 19:42:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:27:16.018 19:42:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:16.018 19:42:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.018 19:42:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:16.018 19:42:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:16.018 19:42:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:16.018 19:42:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:16.018 19:42:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.018 19:42:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:16.018 19:42:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:16.018 19:42:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:27:16.018 19:42:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:27:16.018 19:42:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:16.018 19:42:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:16.018 19:42:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:16.018 19:42:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:16.018 19:42:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:16.018 19:42:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.018 19:42:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:16.018 request: 00:27:16.018 { 00:27:16.018 "name": "nvme_second", 00:27:16.018 "trtype": "tcp", 00:27:16.018 "traddr": "10.0.0.2", 00:27:16.018 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:16.018 "adrfam": "ipv4", 00:27:16.018 "trsvcid": "8009", 00:27:16.018 "wait_for_attach": true, 00:27:16.018 "method": "bdev_nvme_start_discovery", 00:27:16.018 "req_id": 1 00:27:16.018 } 00:27:16.018 Got JSON-RPC error response 00:27:16.018 response: 00:27:16.018 { 00:27:16.018 "code": -17, 00:27:16.018 "message": "File exists" 00:27:16.018 } 00:27:16.018 19:42:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:16.018 19:42:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:27:16.018 19:42:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:16.018 19:42:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:16.018 19:42:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:16.018 19:42:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:27:16.018 19:42:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:16.018 19:42:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:16.018 19:42:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:16.018 19:42:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.018 19:42:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:16.018 19:42:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:16.018 19:42:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.018 19:42:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:27:16.018 19:42:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:27:16.018 19:42:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:16.018 19:42:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.018 19:42:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:16.018 19:42:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:16.018 19:42:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:16.018 19:42:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:16.018 
19:42:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.279 19:42:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:16.279 19:42:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:16.279 19:42:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:27:16.279 19:42:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:16.279 19:42:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:16.279 19:42:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:16.279 19:42:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:16.279 19:42:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:16.279 19:42:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:16.279 19:42:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.279 19:42:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:17.221 [2024-05-15 19:42:43.228586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.221 [2024-05-15 19:42:43.228926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.221 [2024-05-15 19:42:43.228938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x580720 with addr=10.0.0.2, port=8010 00:27:17.221 [2024-05-15 19:42:43.228951] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:17.221 [2024-05-15 19:42:43.228959] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:17.221 [2024-05-15 19:42:43.228966] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:18.165 [2024-05-15 19:42:44.230912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.165 [2024-05-15 19:42:44.231277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.165 [2024-05-15 19:42:44.231288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59cb80 with addr=10.0.0.2, port=8010 00:27:18.165 [2024-05-15 19:42:44.231299] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:18.165 [2024-05-15 19:42:44.231306] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:18.165 [2024-05-15 19:42:44.231317] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:19.110 [2024-05-15 19:42:45.232838] bdev_nvme.c:7010:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:27:19.110 request: 00:27:19.110 { 00:27:19.110 "name": "nvme_second", 00:27:19.110 "trtype": "tcp", 00:27:19.110 "traddr": "10.0.0.2", 00:27:19.110 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:19.110 
"adrfam": "ipv4", 00:27:19.110 "trsvcid": "8010", 00:27:19.110 "attach_timeout_ms": 3000, 00:27:19.110 "method": "bdev_nvme_start_discovery", 00:27:19.110 "req_id": 1 00:27:19.110 } 00:27:19.110 Got JSON-RPC error response 00:27:19.110 response: 00:27:19.110 { 00:27:19.110 "code": -110, 00:27:19.110 "message": "Connection timed out" 00:27:19.110 } 00:27:19.110 19:42:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:19.110 19:42:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:27:19.110 19:42:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:19.110 19:42:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:19.110 19:42:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:19.110 19:42:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:27:19.110 19:42:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:19.110 19:42:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:19.110 19:42:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.110 19:42:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:19.110 19:42:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:19.110 19:42:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:19.110 19:42:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.110 19:42:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:27:19.110 19:42:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:27:19.110 19:42:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3732812 00:27:19.110 19:42:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:27:19.110 19:42:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:19.110 19:42:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:27:19.372 19:42:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:19.372 19:42:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:27:19.372 19:42:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:19.372 19:42:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:19.372 rmmod nvme_tcp 00:27:19.372 rmmod nvme_fabrics 00:27:19.372 rmmod nvme_keyring 00:27:19.372 19:42:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:19.372 19:42:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:27:19.372 19:42:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:27:19.372 19:42:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 3732491 ']' 00:27:19.372 19:42:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 3732491 00:27:19.372 19:42:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 3732491 ']' 00:27:19.372 19:42:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 3732491 00:27:19.372 19:42:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:27:19.372 19:42:45 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:19.372 19:42:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3732491 00:27:19.372 19:42:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:19.372 19:42:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:19.372 19:42:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3732491' 00:27:19.372 killing process with pid 3732491 00:27:19.372 19:42:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 3732491 00:27:19.372 [2024-05-15 19:42:45.436696] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:19.372 19:42:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 3732491 00:27:19.634 19:42:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:19.634 19:42:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:19.634 19:42:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:19.634 19:42:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:19.634 19:42:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:19.634 19:42:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:19.634 19:42:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:19.634 19:42:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:21.549 19:42:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:21.549 00:27:21.549 real 0m20.782s 00:27:21.549 user 0m23.512s 00:27:21.549 sys 0m7.556s 00:27:21.549 19:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:21.550 19:42:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.550 ************************************ 00:27:21.550 END TEST nvmf_host_discovery 00:27:21.550 ************************************ 00:27:21.550 19:42:47 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:27:21.550 19:42:47 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:21.550 19:42:47 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:21.550 19:42:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:21.811 ************************************ 00:27:21.811 START TEST nvmf_host_multipath_status 00:27:21.811 ************************************ 00:27:21.811 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:27:21.811 * Looking for test storage... 
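For reference, the two JSON-RPC failures recorded in the discovery trace above are the expected outcomes of the test's negative checks: re-registering a discovery service name that already exists returns -17 ("File exists"), and pointing a second discovery attempt at port 8010, where nothing is listening, ends in -110 ("Connection timed out") once the 3000 ms attach timeout expires. The trace drives both through a NOT wrapper that succeeds only when the wrapped rpc_cmd fails; a minimal sketch of that idea (the helper body here is an assumption, the real helper also inspects the exit status it captured as es):

    # Sketch of an expected-failure wrapper, assumed for illustration.
    NOT() {
        if "$@"; then
            return 1   # the command unexpectedly succeeded
        fi
        return 0       # a non-zero exit is what the test wanted
    }
    # e.g. NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
    #          -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w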
00:27:21.811 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:21.811 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:21.811 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:27:21.811 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:21.811 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:21.811 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:21.811 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:21.811 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:21.811 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:21.811 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:21.811 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:21.811 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:21.811 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:21.812 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:21.812 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:21.812 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:21.812 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:21.812 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:21.812 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:21.812 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:21.812 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:21.812 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:21.812 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:21.812 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.812 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.812 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.812 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:27:21.812 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.812 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:27:21.812 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:21.812 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:21.812 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:21.812 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:21.812 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:21.812 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:21.812 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:21.812 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:21.812 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:21.812 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:21.812 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:21.812 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:27:21.812 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:21.812 19:42:47 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:27:21.812 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:27:21.812 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:21.812 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:21.812 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:21.812 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:21.812 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:21.812 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:21.812 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:21.812 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:21.812 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:21.812 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:21.812 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:27:21.812 19:42:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:30.073 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:30.073 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:30.073 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:27:30.074 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:30.074 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:30.074 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:30.074 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:30.074 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:30.074 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:30.074 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:30.074 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:30.074 Found net devices under 0000:31:00.0: cvl_0_0 00:27:30.074 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:30.074 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:30.074 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:30.074 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:30.074 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:30.074 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:30.074 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:30.074 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:30.074 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:30.074 Found net devices under 0000:31:00.1: cvl_0_1 00:27:30.074 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:30.074 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:30.074 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:27:30.074 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:30.074 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:30.074 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:30.074 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:30.074 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:30.074 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:30.074 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:30.074 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:30.074 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:30.074 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:30.074 19:42:56 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:30.074 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:30.074 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:30.074 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:30.074 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:30.074 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:30.336 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:30.336 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:30.336 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:30.336 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:30.336 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:30.336 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:30.336 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:30.336 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:30.336 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:27:30.336 00:27:30.336 --- 10.0.0.2 ping statistics --- 00:27:30.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:30.336 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:27:30.336 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:30.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:30.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.340 ms 00:27:30.336 00:27:30.336 --- 10.0.0.1 ping statistics --- 00:27:30.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:30.336 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:27:30.336 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:30.336 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:27:30.336 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:30.336 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:30.336 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:30.336 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:30.336 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:30.336 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:30.336 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:30.336 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:27:30.336 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:30.336 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:30.336 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:30.336 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=3739348 00:27:30.336 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 3739348 00:27:30.336 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:30.336 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 3739348 ']' 00:27:30.336 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:30.336 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:30.336 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:30.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:30.336 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:30.336 19:42:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:30.598 [2024-05-15 19:42:56.555082] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
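The nvmf_tcp_init trace above moves one port of the NIC pair into a private network namespace so the SPDK target and the initiator can exercise a real NVMe/TCP path on a single host. The following is a condensed sketch of that setup, reusing the device names (cvl_0_0, cvl_0_1), addresses and namespace name exactly as they appear in this run; they will differ on other NICs, and the authoritative logic is the traced nvmf/common.sh itself.

    # Sketch only, condensed from the nvmf_tcp_init trace in this log.
    NVMF_INITIATOR_IP=10.0.0.1
    NVMF_FIRST_TARGET_IP=10.0.0.2
    NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NVMF_TARGET_NAMESPACE"
    ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"            # target-side port lives in the namespace
    ip addr add "$NVMF_INITIATOR_IP/24" dev cvl_0_1               # initiator-side port stays in the root namespace
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add "$NVMF_FIRST_TARGET_IP/24" dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # accept inbound TCP port 4420 on the initiator-side interface
    ping -c 1 "$NVMF_FIRST_TARGET_IP"                             # reachability check in both directions
    ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 "$NVMF_INITIATOR_IP"

    # The target is then launched inside the namespace (path shown relative to an SPDK checkout):
    ip netns exec "$NVMF_TARGET_NAMESPACE" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &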
00:27:30.598 [2024-05-15 19:42:56.555143] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:30.598 EAL: No free 2048 kB hugepages reported on node 1 00:27:30.598 [2024-05-15 19:42:56.653170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:30.598 [2024-05-15 19:42:56.749810] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:30.598 [2024-05-15 19:42:56.749880] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:30.598 [2024-05-15 19:42:56.749888] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:30.598 [2024-05-15 19:42:56.749895] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:30.598 [2024-05-15 19:42:56.749900] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:30.598 [2024-05-15 19:42:56.750036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:30.598 [2024-05-15 19:42:56.750040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:31.541 19:42:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:31.541 19:42:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:27:31.541 19:42:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:31.541 19:42:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:31.541 19:42:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:31.541 19:42:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:31.541 19:42:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3739348 00:27:31.541 19:42:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:31.541 [2024-05-15 19:42:57.649253] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:31.541 19:42:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:31.802 Malloc0 00:27:31.802 19:42:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:27:32.063 19:42:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:32.325 19:42:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:32.325 [2024-05-15 19:42:58.477934] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be 
removed in v24.09 00:27:32.325 [2024-05-15 19:42:58.478178] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:32.325 19:42:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:32.586 [2024-05-15 19:42:58.666627] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:32.586 19:42:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3739877 00:27:32.586 19:42:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:32.586 19:42:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:27:32.586 19:42:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3739877 /var/tmp/bdevperf.sock 00:27:32.586 19:42:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 3739877 ']' 00:27:32.586 19:42:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:32.586 19:42:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:32.586 19:42:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:32.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
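At this point the target is fully configured and bdevperf has been started with its own RPC socket; everything that follows in the trace is driven through rpc.py. Below is a condensed view of the RPC sequence recorded in this run; the long Jenkins workspace prefix is shortened to scripts/rpc.py, the arguments are copied from the trace, and the pipe into jq stands in for the port_status helper's capture-and-filter step.

    # Target side: one subsystem backed by a malloc bdev, exported on two TCP listeners (4420 and 4421).
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

    # Initiator side (bdevperf's RPC socket): attach the same subsystem through both listeners;
    # the second attach uses -x multipath so both paths are grouped under a single bdev, Nvme0n1.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

    # The check_status/port_status helpers seen below flip each listener's ANA state on the target
    # and read the per-path flags (current / connected / accessible) back from bdevperf, e.g.:
    scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'

Later in the run the same pattern repeats for the non_optimized and inaccessible ANA states, and bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active changes which "current" values check_status expects before the final rounds of checks.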
00:27:32.586 19:42:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:32.586 19:42:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:32.846 19:42:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:32.846 19:42:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:27:32.846 19:42:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:33.106 19:42:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:27:33.366 Nvme0n1 00:27:33.366 19:42:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:33.938 Nvme0n1 00:27:33.938 19:42:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:27:33.938 19:42:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:27:35.852 19:43:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:27:35.852 19:43:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:36.113 19:43:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:36.375 19:43:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:27:37.314 19:43:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:27:37.314 19:43:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:37.314 19:43:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:37.314 19:43:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:37.574 19:43:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:37.574 19:43:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:37.574 19:43:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:37.574 19:43:03 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:37.835 19:43:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:37.835 19:43:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:37.835 19:43:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:37.835 19:43:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:37.836 19:43:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:37.836 19:43:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:37.836 19:43:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:37.836 19:43:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:38.097 19:43:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:38.097 19:43:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:38.097 19:43:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:38.097 19:43:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:38.358 19:43:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:38.358 19:43:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:38.358 19:43:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:38.358 19:43:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:38.619 19:43:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:38.619 19:43:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:27:38.619 19:43:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:38.879 19:43:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:39.140 19:43:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:27:40.083 19:43:06 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:27:40.083 19:43:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:40.083 19:43:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:40.083 19:43:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:40.343 19:43:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:40.343 19:43:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:40.343 19:43:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:40.343 19:43:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:40.343 19:43:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:40.343 19:43:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:40.343 19:43:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:40.343 19:43:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:40.603 19:43:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:40.603 19:43:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:40.603 19:43:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:40.603 19:43:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:40.863 19:43:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:40.863 19:43:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:40.863 19:43:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:40.863 19:43:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:41.124 19:43:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:41.124 19:43:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:41.124 19:43:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:41.124 19:43:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:41.384 19:43:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:41.384 19:43:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:27:41.384 19:43:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:41.644 19:43:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:41.644 19:43:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:27:43.026 19:43:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:27:43.026 19:43:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:43.026 19:43:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:43.026 19:43:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:43.026 19:43:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:43.026 19:43:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:43.026 19:43:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:43.026 19:43:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:43.286 19:43:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:43.286 19:43:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:43.286 19:43:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:43.286 19:43:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:43.286 19:43:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:43.286 19:43:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:43.286 19:43:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:43.286 19:43:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:43.546 19:43:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:43.546 19:43:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:43.546 19:43:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:43.546 19:43:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:43.808 19:43:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:43.808 19:43:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:43.808 19:43:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:43.808 19:43:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:44.069 19:43:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:44.069 19:43:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:27:44.069 19:43:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:44.069 19:43:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:44.329 19:43:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:27:45.714 19:43:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:27:45.714 19:43:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:45.714 19:43:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:45.714 19:43:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:45.714 19:43:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:45.714 19:43:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:45.714 19:43:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:45.714 19:43:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:45.975 19:43:11 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:45.975 19:43:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:45.975 19:43:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:45.975 19:43:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:45.975 19:43:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:45.975 19:43:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:45.975 19:43:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:45.975 19:43:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:46.236 19:43:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:46.237 19:43:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:46.237 19:43:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:46.237 19:43:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:46.498 19:43:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:46.498 19:43:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:46.498 19:43:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:46.498 19:43:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:46.759 19:43:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:46.759 19:43:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:27:46.759 19:43:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:47.019 19:43:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:47.019 19:43:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:27:48.404 19:43:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:27:48.404 19:43:14 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:48.404 19:43:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:48.404 19:43:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:48.404 19:43:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:48.404 19:43:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:48.404 19:43:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:48.404 19:43:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:48.665 19:43:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:48.665 19:43:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:48.665 19:43:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:48.665 19:43:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:48.665 19:43:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:48.665 19:43:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:48.665 19:43:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:48.665 19:43:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:48.926 19:43:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:48.926 19:43:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:48.926 19:43:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:48.926 19:43:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:49.187 19:43:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:49.187 19:43:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:49.187 19:43:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:49.187 19:43:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:49.448 19:43:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:49.448 19:43:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:27:49.448 19:43:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:49.708 19:43:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:49.970 19:43:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:27:50.912 19:43:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:27:50.912 19:43:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:50.912 19:43:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:50.912 19:43:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:51.172 19:43:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:51.172 19:43:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:51.172 19:43:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:51.172 19:43:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:51.172 19:43:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:51.172 19:43:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:51.433 19:43:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:51.433 19:43:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:51.433 19:43:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:51.433 19:43:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:51.433 19:43:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:51.433 19:43:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:51.696 19:43:17 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:51.696 19:43:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:51.696 19:43:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:51.696 19:43:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:51.957 19:43:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:51.957 19:43:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:51.957 19:43:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:51.957 19:43:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:52.218 19:43:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:52.218 19:43:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:27:52.478 19:43:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:27:52.478 19:43:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:52.478 19:43:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:52.739 19:43:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:27:53.747 19:43:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:27:53.747 19:43:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:53.747 19:43:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:53.747 19:43:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:54.007 19:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:54.007 19:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:54.007 19:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:54.007 19:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").current' 00:27:54.268 19:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:54.268 19:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:54.268 19:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:54.268 19:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:54.529 19:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:54.529 19:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:54.529 19:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:54.529 19:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:54.790 19:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:54.790 19:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:54.790 19:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:54.790 19:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:55.051 19:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:55.051 19:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:55.051 19:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:55.051 19:43:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:55.051 19:43:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:55.051 19:43:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:27:55.051 19:43:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:55.312 19:43:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:55.571 19:43:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:27:56.514 19:43:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true 
true true true true 00:27:56.514 19:43:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:56.514 19:43:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:56.514 19:43:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:56.778 19:43:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:56.778 19:43:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:56.778 19:43:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:56.778 19:43:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:57.039 19:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:57.039 19:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:57.039 19:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:57.039 19:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:57.300 19:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:57.300 19:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:57.300 19:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:57.300 19:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:57.300 19:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:57.300 19:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:57.300 19:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:57.300 19:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:57.561 19:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:57.561 19:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:57.561 19:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:57.561 19:43:23 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:57.821 19:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:57.821 19:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:27:57.821 19:43:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:58.126 19:43:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:58.386 19:43:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:27:59.329 19:43:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:27:59.329 19:43:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:59.329 19:43:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:59.329 19:43:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:59.589 19:43:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:59.589 19:43:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:59.589 19:43:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:59.589 19:43:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:59.850 19:43:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:59.850 19:43:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:59.850 19:43:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:59.850 19:43:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:59.850 19:43:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:59.850 19:43:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:59.850 19:43:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:59.850 19:43:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:00.110 19:43:26 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:00.110 19:43:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:00.110 19:43:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:00.110 19:43:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:00.371 19:43:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:00.371 19:43:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:00.371 19:43:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:00.371 19:43:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:00.632 19:43:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:00.632 19:43:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:28:00.632 19:43:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:00.893 19:43:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:00.893 19:43:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:28:02.276 19:43:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:28:02.276 19:43:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:02.276 19:43:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:02.276 19:43:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:02.276 19:43:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:02.276 19:43:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:02.276 19:43:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:02.276 19:43:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:02.536 19:43:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:02.536 19:43:28 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:02.536 19:43:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:02.536 19:43:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:02.536 19:43:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:02.536 19:43:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:02.536 19:43:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:02.536 19:43:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:02.797 19:43:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:02.797 19:43:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:02.797 19:43:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:02.797 19:43:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:03.057 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:03.057 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:03.057 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:03.057 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:03.340 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:03.340 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3739877 00:28:03.340 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 3739877 ']' 00:28:03.340 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 3739877 00:28:03.340 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:28:03.340 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:03.340 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3739877 00:28:03.340 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:28:03.340 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:28:03.340 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 
3739877' 00:28:03.340 killing process with pid 3739877 00:28:03.340 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 3739877 00:28:03.340 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 3739877 00:28:03.340 Connection closed with partial response: 00:28:03.340 00:28:03.340 00:28:03.340 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3739877 00:28:03.340 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:03.340 [2024-05-15 19:42:58.724736] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:28:03.340 [2024-05-15 19:42:58.724797] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3739877 ] 00:28:03.340 EAL: No free 2048 kB hugepages reported on node 1 00:28:03.340 [2024-05-15 19:42:58.780719] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:03.340 [2024-05-15 19:42:58.833041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:03.340 Running I/O for 90 seconds... 00:28:03.340 [2024-05-15 19:43:12.954474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.340 [2024-05-15 19:43:12.954511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.340 [2024-05-15 19:43:12.954529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.340 [2024-05-15 19:43:12.954535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:03.340 [2024-05-15 19:43:12.954546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.340 [2024-05-15 19:43:12.954551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:03.340 [2024-05-15 19:43:12.954562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.340 [2024-05-15 19:43:12.954566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:03.340 [2024-05-15 19:43:12.954577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.340 [2024-05-15 19:43:12.954582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:03.340 [2024-05-15 19:43:12.954592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.340 [2024-05-15 19:43:12.954597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:03.340 [2024-05-15 19:43:12.954607] 
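The teardown recorded a few entries back (kill -0 on the pid, a ps comm check, kill, then wait on the same pid) is what lets bdevperf exit and flush its output before try.txt is collected. A hedged, standalone sketch of that pattern, assuming the process was started by the same shell; stop_bdevperf is our name, while the trace itself uses killprocess from autotest_common.sh:

    # Verify the pid is still alive, refuse to kill a sudo wrapper,
    # then kill it and wait so its output is fully flushed.
    stop_bdevperf() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0                         # already gone
        [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1   # never kill sudo itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null                                        # reap and flush output
    }
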
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.340 [2024-05-15 19:43:12.954613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:03.340 [2024-05-15 19:43:12.954623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.340 [2024-05-15 19:43:12.954628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:03.340 [2024-05-15 19:43:12.954838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.340 [2024-05-15 19:43:12.954846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:03.340 [2024-05-15 19:43:12.954856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.340 [2024-05-15 19:43:12.954861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:03.340 [2024-05-15 19:43:12.954872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.340 [2024-05-15 19:43:12.954884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:03.340 [2024-05-15 19:43:12.954895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:79864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.340 [2024-05-15 19:43:12.954902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:03.340 [2024-05-15 19:43:12.954912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.340 [2024-05-15 19:43:12.954919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:03.340 [2024-05-15 19:43:12.954930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.340 [2024-05-15 19:43:12.954936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:03.340 [2024-05-15 19:43:12.954946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.340 [2024-05-15 19:43:12.954952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:03.340 [2024-05-15 19:43:12.954961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.340 [2024-05-15 19:43:12.954966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0010 p:0 m:0 
dnr:0 00:28:03.340 [2024-05-15 19:43:12.954977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.340 [2024-05-15 19:43:12.954982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:03.340 [2024-05-15 19:43:12.954992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.340 [2024-05-15 19:43:12.954997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:03.340 [2024-05-15 19:43:12.955007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.340 [2024-05-15 19:43:12.955013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:03.340 [2024-05-15 19:43:12.955023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.340 [2024-05-15 19:43:12.955029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:03.340 [2024-05-15 19:43:12.955039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.340 [2024-05-15 19:43:12.955044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:03.340 [2024-05-15 19:43:12.955054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:79944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.340 [2024-05-15 19:43:12.955060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:03.340 [2024-05-15 19:43:12.955071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.340 [2024-05-15 19:43:12.955076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:03.340 [2024-05-15 19:43:12.955088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:79960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.340 [2024-05-15 19:43:12.955093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:03.340 [2024-05-15 19:43:12.955103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.340 [2024-05-15 19:43:12.955108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:03.340 [2024-05-15 19:43:12.955118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:79976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.340 [2024-05-15 19:43:12.955123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:03.340 [2024-05-15 19:43:12.955133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.340 [2024-05-15 19:43:12.955138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:03.340 [2024-05-15 19:43:12.955148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:79992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.340 [2024-05-15 19:43:12.955153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:03.340 [2024-05-15 19:43:12.955163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.340 [2024-05-15 19:43:12.955168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:03.341 [2024-05-15 19:43:12.955179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.341 [2024-05-15 19:43:12.955184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:03.341 [2024-05-15 19:43:12.955195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.341 [2024-05-15 19:43:12.955201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:03.341 [2024-05-15 19:43:12.955212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.341 [2024-05-15 19:43:12.955217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:03.341 [2024-05-15 19:43:12.955228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.341 [2024-05-15 19:43:12.955233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.341 [2024-05-15 19:43:12.955244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.341 [2024-05-15 19:43:12.955250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:03.341 [2024-05-15 19:43:12.955261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.341 [2024-05-15 19:43:12.955267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:03.341 [2024-05-15 19:43:12.955278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.341 [2024-05-15 19:43:12.955284] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:03.341 [2024-05-15 19:43:12.955295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.341 [2024-05-15 19:43:12.955300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:03.341 [2024-05-15 19:43:12.955311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.341 [2024-05-15 19:43:12.955323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:03.341 [2024-05-15 19:43:12.956642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.341 [2024-05-15 19:43:12.956653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:03.341 [2024-05-15 19:43:12.956666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.341 [2024-05-15 19:43:12.956671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:03.341 [2024-05-15 19:43:12.956681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.341 [2024-05-15 19:43:12.956687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:03.341 [2024-05-15 19:43:12.956698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.341 [2024-05-15 19:43:12.956703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:03.341 [2024-05-15 19:43:12.956713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.341 [2024-05-15 19:43:12.956718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:03.341 [2024-05-15 19:43:12.956728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.341 [2024-05-15 19:43:12.956733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:03.341 [2024-05-15 19:43:12.956744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.341 [2024-05-15 19:43:12.956750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:03.341 [2024-05-15 19:43:12.956760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:03.341 [2024-05-15 19:43:12.956764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:03.341 [2024-05-15 19:43:12.956774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.341 [2024-05-15 19:43:12.956780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:03.341 [2024-05-15 19:43:12.956790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.341 [2024-05-15 19:43:12.956798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:03.341 [2024-05-15 19:43:12.956808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.341 [2024-05-15 19:43:12.956813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:03.341 [2024-05-15 19:43:12.956823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.341 [2024-05-15 19:43:12.956828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:03.341 [2024-05-15 19:43:12.956838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:80176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.341 [2024-05-15 19:43:12.956843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:03.341 [2024-05-15 19:43:12.956853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.341 [2024-05-15 19:43:12.956858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:03.341 [2024-05-15 19:43:12.956868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.341 [2024-05-15 19:43:12.956873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:03.341 [2024-05-15 19:43:12.956884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.341 [2024-05-15 19:43:12.956889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:03.341 [2024-05-15 19:43:12.956899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.341 [2024-05-15 19:43:12.956904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:03.341 [2024-05-15 19:43:12.957047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 
lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.341 [2024-05-15 19:43:12.957055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:03.341 [2024-05-15 19:43:12.957066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.341 [2024-05-15 19:43:12.957071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:03.341 [2024-05-15 19:43:12.957081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.341 [2024-05-15 19:43:12.957086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:03.341 [2024-05-15 19:43:12.957096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:80240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.341 [2024-05-15 19:43:12.957101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:03.341 [2024-05-15 19:43:12.957111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.341 [2024-05-15 19:43:12.957117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:03.341 [2024-05-15 19:43:12.957128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:80256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.341 [2024-05-15 19:43:12.957133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:03.341 [2024-05-15 19:43:12.957143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.341 [2024-05-15 19:43:12.957148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:03.341 [2024-05-15 19:43:12.957159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.341 [2024-05-15 19:43:12.957164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:03.341 [2024-05-15 19:43:12.957174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:80280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.341 [2024-05-15 19:43:12.957179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:03.341 [2024-05-15 19:43:12.957189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.341 [2024-05-15 19:43:12.957194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.341 [2024-05-15 19:43:12.957204] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.341 [2024-05-15 19:43:12.957210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:03.341 [2024-05-15 19:43:12.957220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:80304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.341 [2024-05-15 19:43:12.957226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:03.342 [2024-05-15 19:43:12.957236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.342 [2024-05-15 19:43:12.957240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:03.342 [2024-05-15 19:43:12.957251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.342 [2024-05-15 19:43:12.957256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:03.342 [2024-05-15 19:43:12.957266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:80328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.342 [2024-05-15 19:43:12.957271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:03.342 [2024-05-15 19:43:12.957282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:80336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.342 [2024-05-15 19:43:12.957287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:03.342 [2024-05-15 19:43:12.957297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:80344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.342 [2024-05-15 19:43:12.957302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:03.342 [2024-05-15 19:43:12.957318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:80352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.342 [2024-05-15 19:43:12.957325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:03.342 [2024-05-15 19:43:12.957335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.342 [2024-05-15 19:43:12.957341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:03.342 [2024-05-15 19:43:12.957351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.342 [2024-05-15 19:43:12.957356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004b p:0 m:0 dnr:0 
00:28:03.342 [2024-05-15 19:43:12.957365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.342 [2024-05-15 19:43:12.957371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:03.342 [2024-05-15 19:43:12.957381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.342 [2024-05-15 19:43:12.957387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:03.342 [2024-05-15 19:43:12.957397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.342 [2024-05-15 19:43:12.957402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:03.342 [2024-05-15 19:43:12.957412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.342 [2024-05-15 19:43:12.957417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:03.342 [2024-05-15 19:43:12.957427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.342 [2024-05-15 19:43:12.957433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:03.342 [2024-05-15 19:43:12.957443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:80416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.342 [2024-05-15 19:43:12.957448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:03.342 [2024-05-15 19:43:12.957458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:80424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.342 [2024-05-15 19:43:12.957463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:03.342 [2024-05-15 19:43:12.957473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:80432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.342 [2024-05-15 19:43:12.957478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:03.342 [2024-05-15 19:43:12.957488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.342 [2024-05-15 19:43:12.957494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:03.342 [2024-05-15 19:43:12.957506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.342 [2024-05-15 19:43:12.957511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:56 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:03.342 [2024-05-15 19:43:12.957521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.342 [2024-05-15 19:43:12.957525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:03.342 [2024-05-15 19:43:12.957535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:80464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.342 [2024-05-15 19:43:12.957541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:03.342 [2024-05-15 19:43:12.957552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:80472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.342 [2024-05-15 19:43:12.957557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:03.342 [2024-05-15 19:43:12.957567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:80480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.342 [2024-05-15 19:43:12.957572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:03.342 [2024-05-15 19:43:12.957582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.342 [2024-05-15 19:43:12.957587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:03.342 [2024-05-15 19:43:12.957598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.342 [2024-05-15 19:43:12.957603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:03.342 [2024-05-15 19:43:12.957613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.342 [2024-05-15 19:43:12.957618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:03.342 [2024-05-15 19:43:12.957628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.342 [2024-05-15 19:43:12.957633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:03.342 [2024-05-15 19:43:12.957643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.342 [2024-05-15 19:43:12.957649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:03.342 [2024-05-15 19:43:12.957659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.342 [2024-05-15 19:43:12.957664] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:03.342 [2024-05-15 19:43:12.957674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:80536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.342 [2024-05-15 19:43:12.957679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:03.342 [2024-05-15 19:43:12.957689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.342 [2024-05-15 19:43:12.957696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.342 [2024-05-15 19:43:12.957706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.342 [2024-05-15 19:43:12.957712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:03.342 [2024-05-15 19:43:12.957722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:80560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.342 [2024-05-15 19:43:12.957727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:03.342 [2024-05-15 19:43:12.957737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:80568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.342 [2024-05-15 19:43:12.957742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:03.342 [2024-05-15 19:43:12.957754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.342 [2024-05-15 19:43:12.957759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:03.342 [2024-05-15 19:43:12.957770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.342 [2024-05-15 19:43:12.957775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:03.342 [2024-05-15 19:43:12.958134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:80592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.342 [2024-05-15 19:43:12.958143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:03.342 [2024-05-15 19:43:12.958154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:80600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.342 [2024-05-15 19:43:12.958159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:03.342 [2024-05-15 19:43:12.958170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:03.342 [2024-05-15 19:43:12.958175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:03.343 [2024-05-15 19:43:12.958185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:80616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.343 [2024-05-15 19:43:12.958190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:03.343 [2024-05-15 19:43:12.958200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:80624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.343 [2024-05-15 19:43:12.958205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:03.343 [2024-05-15 19:43:12.958215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:80632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.343 [2024-05-15 19:43:12.958221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:03.343 [2024-05-15 19:43:12.958231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:80640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.343 [2024-05-15 19:43:12.958238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:03.343 [2024-05-15 19:43:12.958248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:80648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.343 [2024-05-15 19:43:12.958253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:03.343 [2024-05-15 19:43:12.958263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:80656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.343 [2024-05-15 19:43:12.958269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:03.343 [2024-05-15 19:43:12.958279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:80664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.343 [2024-05-15 19:43:12.958284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:03.343 [2024-05-15 19:43:12.958294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.343 [2024-05-15 19:43:12.958299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:03.343 [2024-05-15 19:43:12.958310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:79720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.343 [2024-05-15 19:43:12.958319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:03.343 [2024-05-15 19:43:12.958329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 
lba:79728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.343 [2024-05-15 19:43:12.958335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:03.343 [2024-05-15 19:43:12.958346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:79736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.343 [2024-05-15 19:43:12.958351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:03.343 [2024-05-15 19:43:12.958361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:79744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.343 [2024-05-15 19:43:12.958366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:03.343 [2024-05-15 19:43:12.958376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:79752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.343 [2024-05-15 19:43:12.958381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:03.343 [2024-05-15 19:43:12.958391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:79760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.343 [2024-05-15 19:43:12.958397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:03.343 [2024-05-15 19:43:12.958407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:79768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.343 [2024-05-15 19:43:12.958412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:03.343 [2024-05-15 19:43:12.958422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.343 [2024-05-15 19:43:12.958426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:03.343 [2024-05-15 19:43:12.958438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.343 [2024-05-15 19:43:12.958443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:03.343 [2024-05-15 19:43:12.958454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:80696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.343 [2024-05-15 19:43:12.958459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.343 [2024-05-15 19:43:12.958470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:80704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.343 [2024-05-15 19:43:12.958474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:03.343 [2024-05-15 19:43:12.958484] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:80712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.343 [2024-05-15 19:43:12.958489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:03.343 [2024-05-15 19:43:12.958499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:80720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.343 [2024-05-15 19:43:12.958505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:03.343 [2024-05-15 19:43:12.958515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:80728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.343 [2024-05-15 19:43:12.958520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:03.343 [2024-05-15 19:43:12.958530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.343 [2024-05-15 19:43:12.958535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.343 [2024-05-15 19:43:12.958545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.343 [2024-05-15 19:43:12.958550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.343 [2024-05-15 19:43:12.958561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.343 [2024-05-15 19:43:12.958566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:03.343 [2024-05-15 19:43:12.958576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.343 [2024-05-15 19:43:12.958581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:03.343 [2024-05-15 19:43:12.958591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.343 [2024-05-15 19:43:12.958598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:03.343 [2024-05-15 19:43:12.958822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.343 [2024-05-15 19:43:12.958830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:03.343 [2024-05-15 19:43:12.958844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.343 [2024-05-15 19:43:12.958850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 
00:28:03.343 [2024-05-15 19:43:12.958860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.343 [2024-05-15 19:43:12.958866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:03.343 [2024-05-15 19:43:12.958876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.343 [2024-05-15 19:43:12.958881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:03.343 [2024-05-15 19:43:12.958891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.343 [2024-05-15 19:43:12.958896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:03.343 [2024-05-15 19:43:12.958907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.343 [2024-05-15 19:43:12.958912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:03.343 [2024-05-15 19:43:12.958923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.343 [2024-05-15 19:43:12.958928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:03.343 [2024-05-15 19:43:12.958938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:79864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.343 [2024-05-15 19:43:12.958943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:03.343 [2024-05-15 19:43:12.958953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.343 [2024-05-15 19:43:12.958958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:03.343 [2024-05-15 19:43:12.958968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.343 [2024-05-15 19:43:12.958973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:03.343 [2024-05-15 19:43:12.958983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.343 [2024-05-15 19:43:12.958988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:03.343 [2024-05-15 19:43:12.958998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.343 [2024-05-15 19:43:12.959003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:42 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:03.343 [2024-05-15 19:43:12.959014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.343 [2024-05-15 19:43:12.959019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:03.343 [2024-05-15 19:43:12.959029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.344 [2024-05-15 19:43:12.959035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:03.344 [2024-05-15 19:43:12.959045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.344 [2024-05-15 19:43:12.959050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:03.344 [2024-05-15 19:43:12.959060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.344 [2024-05-15 19:43:12.959066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:03.344 [2024-05-15 19:43:12.959205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.344 [2024-05-15 19:43:12.959213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:03.344 [2024-05-15 19:43:12.959224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.344 [2024-05-15 19:43:12.959230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:03.344 [2024-05-15 19:43:12.959240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.344 [2024-05-15 19:43:12.959245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:03.344 [2024-05-15 19:43:12.959255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.344 [2024-05-15 19:43:12.959261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:03.344 [2024-05-15 19:43:12.959271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.344 [2024-05-15 19:43:12.959277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:03.344 [2024-05-15 19:43:12.959287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:79976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.344 [2024-05-15 19:43:12.959292] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:03.344 [2024-05-15 19:43:12.959302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.344 [2024-05-15 19:43:12.959307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:03.344 [2024-05-15 19:43:12.959321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.344 [2024-05-15 19:43:12.959326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:03.344 [2024-05-15 19:43:12.959337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.344 [2024-05-15 19:43:12.959342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:03.344 [2024-05-15 19:43:12.959352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:80008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.344 [2024-05-15 19:43:12.959359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:03.344 [2024-05-15 19:43:12.959369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.344 [2024-05-15 19:43:12.959374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:03.344 [2024-05-15 19:43:12.959384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.344 [2024-05-15 19:43:12.959389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:03.344 [2024-05-15 19:43:12.959399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.344 [2024-05-15 19:43:12.959405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.344 [2024-05-15 19:43:12.959415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.344 [2024-05-15 19:43:12.959420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:03.344 [2024-05-15 19:43:12.959430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.344 [2024-05-15 19:43:12.959435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:03.344 [2024-05-15 19:43:12.959446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:03.344 [2024-05-15 19:43:12.959451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:03.344 [2024-05-15 19:43:12.959584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:80064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.344 [2024-05-15 19:43:12.959592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:03.344 [2024-05-15 19:43:12.959603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.344 [2024-05-15 19:43:12.959608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:03.344 [2024-05-15 19:43:12.959618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.344 [2024-05-15 19:43:12.959623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:03.344 [2024-05-15 19:43:12.959633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.344 [2024-05-15 19:43:12.959638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:03.344 [2024-05-15 19:43:12.959649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.344 [2024-05-15 19:43:12.959654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:03.344 [2024-05-15 19:43:12.959664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.344 [2024-05-15 19:43:12.959669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:03.344 [2024-05-15 19:43:12.959681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.344 [2024-05-15 19:43:12.959686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:03.344 [2024-05-15 19:43:12.959696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.344 [2024-05-15 19:43:12.959702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:03.344 [2024-05-15 19:43:12.959778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.344 [2024-05-15 19:43:12.959785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:03.344 [2024-05-15 19:43:12.959796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 
lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.344 [2024-05-15 19:43:12.959802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:03.344 [2024-05-15 19:43:12.959812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.344 [2024-05-15 19:43:12.959817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:03.344 [2024-05-15 19:43:12.959827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.344 [2024-05-15 19:43:12.959833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:03.344 [2024-05-15 19:43:12.959842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.345 [2024-05-15 19:43:12.959847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:03.345 [2024-05-15 19:43:12.959859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.345 [2024-05-15 19:43:12.959865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:03.345 [2024-05-15 19:43:12.959875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:80176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.345 [2024-05-15 19:43:12.959880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:03.345 [2024-05-15 19:43:12.959891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.345 [2024-05-15 19:43:12.969499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:03.345 [2024-05-15 19:43:12.969667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.345 [2024-05-15 19:43:12.969678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:03.345 [2024-05-15 19:43:12.969691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.345 [2024-05-15 19:43:12.969698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:03.345 [2024-05-15 19:43:12.969711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.345 [2024-05-15 19:43:12.969717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:03.345 [2024-05-15 19:43:12.969726] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.345 [2024-05-15 19:43:12.969732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:03.345 [2024-05-15 19:43:12.969742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.345 [2024-05-15 19:43:12.969747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:03.345 [2024-05-15 19:43:12.969757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.345 [2024-05-15 19:43:12.969763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:03.345 [2024-05-15 19:43:12.969772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.345 [2024-05-15 19:43:12.969778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:03.345 [2024-05-15 19:43:12.969789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.345 [2024-05-15 19:43:12.969793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:03.345 [2024-05-15 19:43:12.969803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:80256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.345 [2024-05-15 19:43:12.969808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:03.345 [2024-05-15 19:43:12.969819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.345 [2024-05-15 19:43:12.969824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:03.345 [2024-05-15 19:43:12.969834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.345 [2024-05-15 19:43:12.969839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:03.345 [2024-05-15 19:43:12.969849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:80280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.345 [2024-05-15 19:43:12.969854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:03.345 [2024-05-15 19:43:12.969864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:80288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.345 [2024-05-15 19:43:12.969869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:28:03.345 [2024-05-15 19:43:12.969879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.345 [2024-05-15 19:43:12.969885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:03.345 [2024-05-15 19:43:12.969895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:80304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.345 [2024-05-15 19:43:12.969902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:03.345 [2024-05-15 19:43:12.969912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.345 [2024-05-15 19:43:12.969917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:03.345 [2024-05-15 19:43:12.969927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:80320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.345 [2024-05-15 19:43:12.969932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:03.345 [2024-05-15 19:43:12.969942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:80328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.345 [2024-05-15 19:43:12.969947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:03.345 [2024-05-15 19:43:12.969957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.345 [2024-05-15 19:43:12.969962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:03.345 [2024-05-15 19:43:12.969972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.345 [2024-05-15 19:43:12.969977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:03.345 [2024-05-15 19:43:12.969987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.345 [2024-05-15 19:43:12.969993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:03.345 [2024-05-15 19:43:12.970003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:80360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.345 [2024-05-15 19:43:12.970007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:03.345 [2024-05-15 19:43:12.970017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:80368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.345 [2024-05-15 19:43:12.970024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:94 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:03.345 [2024-05-15 19:43:12.970035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:80376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.345 [2024-05-15 19:43:12.970040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:03.345 [2024-05-15 19:43:12.970050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.345 [2024-05-15 19:43:12.970055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:03.345 [2024-05-15 19:43:12.970066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.345 [2024-05-15 19:43:12.970072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:03.345 [2024-05-15 19:43:12.970082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.345 [2024-05-15 19:43:12.970088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:03.345 [2024-05-15 19:43:12.970098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.345 [2024-05-15 19:43:12.970103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:03.345 [2024-05-15 19:43:12.970113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.345 [2024-05-15 19:43:12.970118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:03.345 [2024-05-15 19:43:12.970128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.345 [2024-05-15 19:43:12.970133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:03.345 [2024-05-15 19:43:12.970143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.345 [2024-05-15 19:43:12.970149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:03.345 [2024-05-15 19:43:12.970159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:80440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.345 [2024-05-15 19:43:12.970164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:03.345 [2024-05-15 19:43:12.970174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:80448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.345 [2024-05-15 19:43:12.970179] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:03.345 [2024-05-15 19:43:12.970189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:80456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.345 [2024-05-15 19:43:12.970194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:03.345 [2024-05-15 19:43:12.970204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:80464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.345 [2024-05-15 19:43:12.970209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:03.345 [2024-05-15 19:43:12.970219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.346 [2024-05-15 19:43:12.970224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:03.346 [2024-05-15 19:43:12.970234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:80480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.346 [2024-05-15 19:43:12.970240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:03.346 [2024-05-15 19:43:12.970250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.346 [2024-05-15 19:43:12.970255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:03.346 [2024-05-15 19:43:12.970265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.346 [2024-05-15 19:43:12.970271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:03.346 [2024-05-15 19:43:12.970282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.346 [2024-05-15 19:43:12.970287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:03.346 [2024-05-15 19:43:12.970297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.346 [2024-05-15 19:43:12.970302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:03.346 [2024-05-15 19:43:12.970318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.346 [2024-05-15 19:43:12.970324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:03.346 [2024-05-15 19:43:12.970334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:03.346 [2024-05-15 19:43:12.970339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:03.346 [2024-05-15 19:43:12.970349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:80536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.346 [2024-05-15 19:43:12.970354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:03.346 [2024-05-15 19:43:12.970363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.346 [2024-05-15 19:43:12.970369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.346 [2024-05-15 19:43:12.970380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:80552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.346 [2024-05-15 19:43:12.970386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:03.346 [2024-05-15 19:43:12.970396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.346 [2024-05-15 19:43:12.970402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:03.346 [2024-05-15 19:43:12.970412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:80568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.346 [2024-05-15 19:43:12.970418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:03.346 [2024-05-15 19:43:12.970428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.346 [2024-05-15 19:43:12.970433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:03.346 [2024-05-15 19:43:12.970443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.346 [2024-05-15 19:43:12.970450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:03.346 [2024-05-15 19:43:12.970460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:80592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.346 [2024-05-15 19:43:12.970465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:03.346 [2024-05-15 19:43:12.970477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.346 [2024-05-15 19:43:12.970482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:03.346 [2024-05-15 19:43:12.970492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:80608 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.346 [2024-05-15 19:43:12.970498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:03.346 [2024-05-15 19:43:12.970508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.346 [2024-05-15 19:43:12.970513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:03.346 [2024-05-15 19:43:12.970524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:80624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.346 [2024-05-15 19:43:12.970529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:03.346 [2024-05-15 19:43:12.970540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:80632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.346 [2024-05-15 19:43:12.970545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:03.346 [2024-05-15 19:43:12.970555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:80640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.346 [2024-05-15 19:43:12.970560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:03.346 [2024-05-15 19:43:12.970570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:80648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.346 [2024-05-15 19:43:12.970576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:03.346 [2024-05-15 19:43:12.970586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:80656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.346 [2024-05-15 19:43:12.970591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:03.346 [2024-05-15 19:43:12.970601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.346 [2024-05-15 19:43:12.970606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:03.346 [2024-05-15 19:43:12.970616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.346 [2024-05-15 19:43:12.970622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:03.346 [2024-05-15 19:43:12.970632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:79720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.346 [2024-05-15 19:43:12.970638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:03.346 [2024-05-15 19:43:12.970648] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:79728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.346 [2024-05-15 19:43:12.970652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:03.346 [2024-05-15 19:43:12.970662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:79736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.346 [2024-05-15 19:43:12.970669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:03.346 [2024-05-15 19:43:12.970680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:79744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.346 [2024-05-15 19:43:12.970685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:03.346 [2024-05-15 19:43:12.970695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:79752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.346 [2024-05-15 19:43:12.970701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:03.346 [2024-05-15 19:43:12.970711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:79760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.346 [2024-05-15 19:43:12.970716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:03.346 [2024-05-15 19:43:12.970726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:79768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.346 [2024-05-15 19:43:12.970731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:03.346 [2024-05-15 19:43:12.970741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.346 [2024-05-15 19:43:12.970746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:03.346 [2024-05-15 19:43:12.970756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:80688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.346 [2024-05-15 19:43:12.970761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:03.346 [2024-05-15 19:43:12.970771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:80696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.346 [2024-05-15 19:43:12.970776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.346 [2024-05-15 19:43:12.970786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:80704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.346 [2024-05-15 19:43:12.970792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007c p:0 m:0 
dnr:0 00:28:03.346 [2024-05-15 19:43:12.970802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:80712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.347 [2024-05-15 19:43:12.970808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:03.347 [2024-05-15 19:43:12.970818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:80720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.347 [2024-05-15 19:43:12.970823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:03.347 [2024-05-15 19:43:12.970834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:80728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.347 [2024-05-15 19:43:12.970839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:03.347 [2024-05-15 19:43:12.970850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:80736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.347 [2024-05-15 19:43:12.970856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.347 [2024-05-15 19:43:12.970866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.347 [2024-05-15 19:43:12.970872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.347 [2024-05-15 19:43:12.970882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.347 [2024-05-15 19:43:12.970888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:03.347 [2024-05-15 19:43:12.970898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.347 [2024-05-15 19:43:12.970903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:03.347 [2024-05-15 19:43:12.970914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.347 [2024-05-15 19:43:12.970919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:03.347 [2024-05-15 19:43:12.970930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.347 [2024-05-15 19:43:12.970935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:03.347 [2024-05-15 19:43:12.970945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.347 [2024-05-15 19:43:12.970950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:03.347 [2024-05-15 19:43:12.970960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.347 [2024-05-15 19:43:12.970965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:03.347 [2024-05-15 19:43:12.970975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.347 [2024-05-15 19:43:12.970980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:03.347 [2024-05-15 19:43:12.970990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.347 [2024-05-15 19:43:12.970995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:03.347 [2024-05-15 19:43:12.971006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.347 [2024-05-15 19:43:12.971012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:03.347 [2024-05-15 19:43:12.971022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.347 [2024-05-15 19:43:12.971026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:03.347 [2024-05-15 19:43:12.971036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:79864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.347 [2024-05-15 19:43:12.971042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:03.347 [2024-05-15 19:43:12.971055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.347 [2024-05-15 19:43:12.971060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:03.347 [2024-05-15 19:43:12.971070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.347 [2024-05-15 19:43:12.971075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:03.347 [2024-05-15 19:43:12.971084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.347 [2024-05-15 19:43:12.971090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:03.347 [2024-05-15 19:43:12.971100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.347 [2024-05-15 19:43:12.971105] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:03.347 [2024-05-15 19:43:12.971115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.347 [2024-05-15 19:43:12.971120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:03.347 [2024-05-15 19:43:12.971131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.347 [2024-05-15 19:43:12.971137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:03.347 [2024-05-15 19:43:12.971147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.347 [2024-05-15 19:43:12.971151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:03.347 [2024-05-15 19:43:12.971161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:79928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.347 [2024-05-15 19:43:12.971168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:03.347 [2024-05-15 19:43:12.971178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.347 [2024-05-15 19:43:12.971183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:03.347 [2024-05-15 19:43:12.971193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.347 [2024-05-15 19:43:12.971198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:03.347 [2024-05-15 19:43:12.971208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.347 [2024-05-15 19:43:12.971214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:03.347 [2024-05-15 19:43:12.971224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.347 [2024-05-15 19:43:12.971229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:03.347 [2024-05-15 19:43:12.971240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.347 [2024-05-15 19:43:12.971245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:03.347 [2024-05-15 19:43:12.971256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:03.347 [2024-05-15 19:43:12.971262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:03.347 [2024-05-15 19:43:12.971272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.347 [2024-05-15 19:43:12.971277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:03.347 [2024-05-15 19:43:12.971287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.347 [2024-05-15 19:43:12.971291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:03.347 [2024-05-15 19:43:12.971301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.347 [2024-05-15 19:43:12.971307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:03.347 [2024-05-15 19:43:12.971320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.347 [2024-05-15 19:43:12.971326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:03.347 [2024-05-15 19:43:12.971336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.347 [2024-05-15 19:43:12.971342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:03.347 [2024-05-15 19:43:12.971353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.347 [2024-05-15 19:43:12.971358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:03.347 [2024-05-15 19:43:12.971368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.347 [2024-05-15 19:43:12.971373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.347 [2024-05-15 19:43:12.971383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.347 [2024-05-15 19:43:12.971389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:03.348 [2024-05-15 19:43:12.971399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.348 [2024-05-15 19:43:12.971404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:03.348 [2024-05-15 19:43:12.971414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 
lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.348 [2024-05-15 19:43:12.971419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:03.348 [2024-05-15 19:43:12.971429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:80064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.348 [2024-05-15 19:43:12.971436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:03.348 [2024-05-15 19:43:12.971446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.348 [2024-05-15 19:43:12.971451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:03.348 [2024-05-15 19:43:12.971461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.348 [2024-05-15 19:43:12.971466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:03.348 [2024-05-15 19:43:12.971477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.348 [2024-05-15 19:43:12.971482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:03.348 [2024-05-15 19:43:12.971492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.348 [2024-05-15 19:43:12.971497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:03.348 [2024-05-15 19:43:12.971507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.348 [2024-05-15 19:43:12.971512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:03.348 [2024-05-15 19:43:12.971523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.348 [2024-05-15 19:43:12.971528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:03.348 [2024-05-15 19:43:12.972416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.348 [2024-05-15 19:43:12.972429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:03.348 [2024-05-15 19:43:12.972442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.348 [2024-05-15 19:43:12.972448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:03.348 [2024-05-15 19:43:12.972459] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.348 [2024-05-15 19:43:12.972465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:03.348 [2024-05-15 19:43:12.972475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.348 [2024-05-15 19:43:12.972482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:03.348 [2024-05-15 19:43:12.972492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.348 [2024-05-15 19:43:12.972497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:03.348 [2024-05-15 19:43:12.972508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.348 [2024-05-15 19:43:12.972516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:03.348 [2024-05-15 19:43:12.972527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.348 [2024-05-15 19:43:12.972532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:03.348 [2024-05-15 19:43:12.972542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:80176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.348 [2024-05-15 19:43:12.972547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:03.348 [2024-05-15 19:43:12.974052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.348 [2024-05-15 19:43:12.974061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:03.348 [2024-05-15 19:43:12.974072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.348 [2024-05-15 19:43:12.974077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:03.348 [2024-05-15 19:43:12.974087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.348 [2024-05-15 19:43:12.974093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:03.348 [2024-05-15 19:43:12.974103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.348 [2024-05-15 19:43:12.974108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 
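The completions above all carry status (03/02): status code type 0x3 (path-related) with status code 0x2, which SPDK prints as ASYMMETRIC ACCESS INACCESSIBLE, i.e. the target is reporting the namespace's ANA state as inaccessible on this path for every outstanding WRITE/READ. A minimal sketch of how a host-side completion callback could recognize that condition, assuming only the public SPDK headers; the function names io_complete and cpl_is_ana_inaccessible are illustrative and not part of the test run logged here:

    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/nvme.h"  /* struct spdk_nvme_cpl, spdk_nvme_cpl_is_error(), status enums */

    /* True when a completion reports the (03/02) status seen in the log:
     * SCT 0x3 = path-related, SC 0x02 = Asymmetric Access Inaccessible. */
    static bool
    cpl_is_ana_inaccessible(const struct spdk_nvme_cpl *cpl)
    {
            return cpl->status.sct == SPDK_NVME_SCT_PATH &&
                   cpl->status.sc == SPDK_NVME_SC_ASYMMETRIC_ACCESS_INACCESSIBLE;
    }

    /* Hypothetical I/O completion callback: treat ANA-inaccessible as a
     * retryable path condition rather than a data error. */
    static void
    io_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
            (void)ctx;
            if (cpl_is_ana_inaccessible(cpl)) {
                    printf("cid %u: ANA INACCESSIBLE, retry on another path\n",
                           (unsigned)cpl->cid);
            } else if (spdk_nvme_cpl_is_error(cpl)) {
                    printf("cid %u: sct=0x%x sc=0x%x\n", (unsigned)cpl->cid,
                           (unsigned)cpl->status.sct, (unsigned)cpl->status.sc);
            }
    }
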
00:28:03.348 [2024-05-15 19:43:12.974118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.348 [2024-05-15 19:43:12.974125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:03.348 [2024-05-15 19:43:12.974135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.348 [2024-05-15 19:43:12.974141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:03.348 [2024-05-15 19:43:12.974151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.348 [2024-05-15 19:43:12.974156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:03.348 [2024-05-15 19:43:12.974166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.348 [2024-05-15 19:43:12.974172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:03.348 [2024-05-15 19:43:12.974182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.348 [2024-05-15 19:43:12.974187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:03.348 [2024-05-15 19:43:12.974197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.348 [2024-05-15 19:43:12.974202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:03.348 [2024-05-15 19:43:12.974216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.348 [2024-05-15 19:43:12.974222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:03.348 [2024-05-15 19:43:12.974232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.348 [2024-05-15 19:43:12.974237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:03.348 [2024-05-15 19:43:12.974247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.348 [2024-05-15 19:43:12.974252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:03.348 [2024-05-15 19:43:12.974263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.348 [2024-05-15 19:43:12.974268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:52 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.348 [2024-05-15 19:43:12.974279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.348 [2024-05-15 19:43:12.974284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:03.348 [2024-05-15 19:43:12.974294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:80304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.348 [2024-05-15 19:43:12.974300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:03.348 [2024-05-15 19:43:12.974311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.348 [2024-05-15 19:43:12.974321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:03.348 [2024-05-15 19:43:12.974464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:80320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.348 [2024-05-15 19:43:12.974473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:03.348 [2024-05-15 19:43:12.974484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:80328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.348 [2024-05-15 19:43:12.974489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:03.348 [2024-05-15 19:43:12.974499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:80336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.348 [2024-05-15 19:43:12.974504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:03.348 [2024-05-15 19:43:12.974515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.349 [2024-05-15 19:43:12.974520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:03.349 [2024-05-15 19:43:12.974531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:80352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.349 [2024-05-15 19:43:12.974536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:03.349 [2024-05-15 19:43:12.974548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.349 [2024-05-15 19:43:12.974554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:03.349 [2024-05-15 19:43:12.974564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.349 [2024-05-15 19:43:12.974569] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:03.349 [2024-05-15 19:43:12.974579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.349 [2024-05-15 19:43:12.974584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:03.349 [2024-05-15 19:43:12.974594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.349 [2024-05-15 19:43:12.974600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:03.349 [2024-05-15 19:43:12.974610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.349 [2024-05-15 19:43:12.974615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:03.349 [2024-05-15 19:43:12.974625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:80400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.349 [2024-05-15 19:43:12.974630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:03.349 [2024-05-15 19:43:12.974641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:80408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.349 [2024-05-15 19:43:12.974646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:03.349 [2024-05-15 19:43:12.974656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.349 [2024-05-15 19:43:12.974661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:03.349 [2024-05-15 19:43:12.974671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:80424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.349 [2024-05-15 19:43:12.974676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:03.349 [2024-05-15 19:43:12.974687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.349 [2024-05-15 19:43:12.974693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:03.349 [2024-05-15 19:43:12.974704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:80440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.349 [2024-05-15 19:43:12.974709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:03.349 [2024-05-15 19:43:12.974719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:80448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:03.349 [2024-05-15 19:43:12.974725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:03.349 [2024-05-15 19:43:12.974735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:80456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.349 [2024-05-15 19:43:12.974741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:03.349 [2024-05-15 19:43:12.974751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.349 [2024-05-15 19:43:12.974756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:03.349 [2024-05-15 19:43:12.974767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:80472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.349 [2024-05-15 19:43:12.974772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:03.349 [2024-05-15 19:43:12.974782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.349 [2024-05-15 19:43:12.974787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:03.349 [2024-05-15 19:43:12.974798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.349 [2024-05-15 19:43:12.974804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:03.349 [2024-05-15 19:43:12.974814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.349 [2024-05-15 19:43:12.974819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:03.349 [2024-05-15 19:43:12.974829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.349 [2024-05-15 19:43:12.974835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:03.349 [2024-05-15 19:43:12.981155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.349 [2024-05-15 19:43:12.981176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:03.349 [2024-05-15 19:43:12.981188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.349 [2024-05-15 19:43:12.981193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:03.349 [2024-05-15 19:43:12.981204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 
lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.349 [2024-05-15 19:43:12.981209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:03.349 [2024-05-15 19:43:12.981220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:80536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.349 [2024-05-15 19:43:12.981225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:03.349 [2024-05-15 19:43:12.981235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.349 [2024-05-15 19:43:12.981240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.349 [2024-05-15 19:43:12.981251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:80552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.349 [2024-05-15 19:43:12.981260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:03.349 [2024-05-15 19:43:12.981271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:80560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.349 [2024-05-15 19:43:12.981277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:03.349 [2024-05-15 19:43:12.981288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:80568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.349 [2024-05-15 19:43:12.981293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:03.349 [2024-05-15 19:43:12.981303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.349 [2024-05-15 19:43:12.981309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:03.349 [2024-05-15 19:43:12.981325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.349 [2024-05-15 19:43:12.981331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:03.349 [2024-05-15 19:43:12.981342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:80592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.349 [2024-05-15 19:43:12.981348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:03.349 [2024-05-15 19:43:12.981358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:80600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.349 [2024-05-15 19:43:12.981364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:03.349 [2024-05-15 19:43:12.981375] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.350 [2024-05-15 19:43:12.981380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:03.350 [2024-05-15 19:43:12.981390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.350 [2024-05-15 19:43:12.981395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:03.350 [2024-05-15 19:43:12.981406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:80624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.350 [2024-05-15 19:43:12.981411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:03.350 [2024-05-15 19:43:12.981421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:80632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.350 [2024-05-15 19:43:12.981427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:03.350 [2024-05-15 19:43:12.981437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.350 [2024-05-15 19:43:12.981443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:03.350 [2024-05-15 19:43:12.981453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.350 [2024-05-15 19:43:12.981458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:03.350 [2024-05-15 19:43:12.981470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.350 [2024-05-15 19:43:12.981475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:03.350 [2024-05-15 19:43:12.981485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:80664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.350 [2024-05-15 19:43:12.981491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:03.350 [2024-05-15 19:43:12.981502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:80672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.350 [2024-05-15 19:43:12.981507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:03.350 [2024-05-15 19:43:12.981517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:79720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.350 [2024-05-15 19:43:12.981522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 
00:28:03.350 [2024-05-15 19:43:12.981533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:79728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.350 [2024-05-15 19:43:12.981538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:03.350 [2024-05-15 19:43:12.981548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:79736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.350 [2024-05-15 19:43:12.981554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:03.350 [2024-05-15 19:43:12.981565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:79744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.350 [2024-05-15 19:43:12.981571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:03.350 [2024-05-15 19:43:12.981581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:79752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.350 [2024-05-15 19:43:12.981586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:03.350 [2024-05-15 19:43:12.981597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:79760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.350 [2024-05-15 19:43:12.981602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:03.350 [2024-05-15 19:43:12.981612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:79768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.350 [2024-05-15 19:43:12.981618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:03.350 [2024-05-15 19:43:12.981628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.350 [2024-05-15 19:43:12.981634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:03.350 [2024-05-15 19:43:12.982087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.350 [2024-05-15 19:43:12.982099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:03.350 [2024-05-15 19:43:12.982114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:80696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.350 [2024-05-15 19:43:12.982121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.350 [2024-05-15 19:43:12.982131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.350 [2024-05-15 19:43:12.982137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:03.350 [2024-05-15 19:43:12.982147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:80712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.350 [2024-05-15 19:43:12.982152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:03.350 [2024-05-15 19:43:12.982163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.350 [2024-05-15 19:43:12.982168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:03.350 [2024-05-15 19:43:12.982178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:80728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.350 [2024-05-15 19:43:12.982185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:03.350 [2024-05-15 19:43:12.982195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:80736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.350 [2024-05-15 19:43:12.982201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.350 [2024-05-15 19:43:12.982211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.350 [2024-05-15 19:43:12.982216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.350 [2024-05-15 19:43:12.982227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.350 [2024-05-15 19:43:12.982233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:03.350 [2024-05-15 19:43:12.982243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.350 [2024-05-15 19:43:12.982248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:03.350 [2024-05-15 19:43:12.982259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.350 [2024-05-15 19:43:12.982264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:03.350 [2024-05-15 19:43:12.982274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.350 [2024-05-15 19:43:12.982279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:03.350 [2024-05-15 19:43:12.982290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.350 [2024-05-15 19:43:12.982295] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:03.350 [2024-05-15 19:43:12.982305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.350 [2024-05-15 19:43:12.982312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:03.350 [2024-05-15 19:43:12.982328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.350 [2024-05-15 19:43:12.982333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:03.350 [2024-05-15 19:43:12.982343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.350 [2024-05-15 19:43:12.982349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:03.350 [2024-05-15 19:43:12.982359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.350 [2024-05-15 19:43:12.982364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:03.350 [2024-05-15 19:43:12.982374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.350 [2024-05-15 19:43:12.982380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:03.350 [2024-05-15 19:43:12.982390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.350 [2024-05-15 19:43:12.982395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:03.350 [2024-05-15 19:43:12.982405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.350 [2024-05-15 19:43:12.982411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:03.350 [2024-05-15 19:43:12.982421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.350 [2024-05-15 19:43:12.982426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:03.350 [2024-05-15 19:43:12.982437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.350 [2024-05-15 19:43:12.982442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:03.350 [2024-05-15 19:43:12.982453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:03.350 [2024-05-15 19:43:12.982458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:03.350 [2024-05-15 19:43:12.982468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.350 [2024-05-15 19:43:12.982474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:03.351 [2024-05-15 19:43:12.982484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.351 [2024-05-15 19:43:12.982489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:03.351 [2024-05-15 19:43:12.982499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.351 [2024-05-15 19:43:12.982506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:03.351 [2024-05-15 19:43:12.982517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.351 [2024-05-15 19:43:12.982522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:03.351 [2024-05-15 19:43:12.982533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.351 [2024-05-15 19:43:12.982538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:03.351 [2024-05-15 19:43:12.982549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.351 [2024-05-15 19:43:12.982554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:03.351 [2024-05-15 19:43:12.982564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.351 [2024-05-15 19:43:12.982569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:03.351 [2024-05-15 19:43:12.982579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:79960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.351 [2024-05-15 19:43:12.982584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:03.351 [2024-05-15 19:43:12.982594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.351 [2024-05-15 19:43:12.982600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:03.351 [2024-05-15 19:43:12.982610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 
lba:79976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.351 [2024-05-15 19:43:12.982615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:03.351 [2024-05-15 19:43:12.982626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.351 [2024-05-15 19:43:12.982631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:03.351 [2024-05-15 19:43:12.982641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:79992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.351 [2024-05-15 19:43:12.982646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:03.351 [2024-05-15 19:43:12.982657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.351 [2024-05-15 19:43:12.982663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:03.351 [2024-05-15 19:43:12.982673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:80008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.351 [2024-05-15 19:43:12.982678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:03.351 [2024-05-15 19:43:12.982689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.351 [2024-05-15 19:43:12.982694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:03.351 [2024-05-15 19:43:12.982705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.351 [2024-05-15 19:43:12.982710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:03.351 [2024-05-15 19:43:12.982721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.351 [2024-05-15 19:43:12.982727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.351 [2024-05-15 19:43:12.982738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.351 [2024-05-15 19:43:12.982743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:03.351 [2024-05-15 19:43:12.982754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.351 [2024-05-15 19:43:12.982760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:03.351 [2024-05-15 19:43:12.982770] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.351 [2024-05-15 19:43:12.982776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:03.351 [2024-05-15 19:43:12.982786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:80064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.351 [2024-05-15 19:43:12.982792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:03.351 [2024-05-15 19:43:12.982802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.351 [2024-05-15 19:43:12.982807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:03.351 [2024-05-15 19:43:12.982818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.351 [2024-05-15 19:43:12.982823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:03.351 [2024-05-15 19:43:12.982833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.351 [2024-05-15 19:43:12.982838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:03.351 [2024-05-15 19:43:12.982849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.351 [2024-05-15 19:43:12.982854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:03.351 [2024-05-15 19:43:12.982864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.351 [2024-05-15 19:43:12.982869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:03.351 [2024-05-15 19:43:12.982879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.351 [2024-05-15 19:43:12.982884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:03.351 [2024-05-15 19:43:12.982895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.351 [2024-05-15 19:43:12.982901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:03.351 [2024-05-15 19:43:12.982911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.351 [2024-05-15 19:43:12.982916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002d p:0 m:0 dnr:0 
00:28:03.351 [2024-05-15 19:43:12.982926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.351 [2024-05-15 19:43:12.982932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:03.351 [2024-05-15 19:43:12.982942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.351 [2024-05-15 19:43:12.982947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:03.351 [2024-05-15 19:43:12.982957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.351 [2024-05-15 19:43:12.982963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:03.351 [2024-05-15 19:43:12.982973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.351 [2024-05-15 19:43:12.982978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:03.351 [2024-05-15 19:43:12.982988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.351 [2024-05-15 19:43:12.982993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:03.351 [2024-05-15 19:43:12.983004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:80176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.351 [2024-05-15 19:43:12.983009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:03.351 [2024-05-15 19:43:12.983019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.351 [2024-05-15 19:43:12.983024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:03.351 [2024-05-15 19:43:12.983034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.351 [2024-05-15 19:43:12.983039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:03.351 [2024-05-15 19:43:12.983049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.351 [2024-05-15 19:43:12.983054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:03.351 [2024-05-15 19:43:12.983065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.351 [2024-05-15 19:43:12.983070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:108 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:03.351 [2024-05-15 19:43:12.983080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.351 [2024-05-15 19:43:12.983087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:03.351 [2024-05-15 19:43:12.983097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.351 [2024-05-15 19:43:12.983102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:03.352 [2024-05-15 19:43:12.983112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.352 [2024-05-15 19:43:12.983117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:03.352 [2024-05-15 19:43:12.983127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:80240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.352 [2024-05-15 19:43:12.983132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:03.352 [2024-05-15 19:43:12.983142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.352 [2024-05-15 19:43:12.983148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:03.352 [2024-05-15 19:43:12.983159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:80256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.352 [2024-05-15 19:43:12.983164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:03.352 [2024-05-15 19:43:12.983174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.352 [2024-05-15 19:43:12.983179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:03.352 [2024-05-15 19:43:12.983190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.352 [2024-05-15 19:43:12.983195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:03.352 [2024-05-15 19:43:12.983205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:80280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.352 [2024-05-15 19:43:12.983211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:03.352 [2024-05-15 19:43:12.983221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.352 [2024-05-15 19:43:12.983226] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.352 [2024-05-15 19:43:12.983237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.352 [2024-05-15 19:43:12.983243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:03.352 [2024-05-15 19:43:12.983257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:80304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.352 [2024-05-15 19:43:12.983264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:03.352 [2024-05-15 19:43:12.983941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.352 [2024-05-15 19:43:12.983955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:03.352 [2024-05-15 19:43:12.983967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:80320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.352 [2024-05-15 19:43:12.983972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:03.352 [2024-05-15 19:43:12.983983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.352 [2024-05-15 19:43:12.983988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:03.352 [2024-05-15 19:43:12.983998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:80336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.352 [2024-05-15 19:43:12.984003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:03.352 [2024-05-15 19:43:12.984014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.352 [2024-05-15 19:43:12.984019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:03.352 [2024-05-15 19:43:12.984029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:80352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.352 [2024-05-15 19:43:12.984034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:03.352 [2024-05-15 19:43:12.984044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:80360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.352 [2024-05-15 19:43:12.984049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:03.352 [2024-05-15 19:43:12.984059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:80368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:03.352 [2024-05-15 19:43:12.984064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:03.352 [2024-05-15 19:43:12.984075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:80376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.352 [2024-05-15 19:43:12.984080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:03.352 [2024-05-15 19:43:12.984090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:80384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.352 [2024-05-15 19:43:12.984095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:03.352 [2024-05-15 19:43:12.984106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.352 [2024-05-15 19:43:12.984111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:03.352 [2024-05-15 19:43:12.984121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.352 [2024-05-15 19:43:12.984127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:03.352 [2024-05-15 19:43:12.984137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.352 [2024-05-15 19:43:12.984142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:03.352 [2024-05-15 19:43:12.984154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:80416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.352 [2024-05-15 19:43:12.984160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:03.352 [2024-05-15 19:43:12.984170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:80424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.352 [2024-05-15 19:43:12.984175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:03.352 [2024-05-15 19:43:12.984185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.352 [2024-05-15 19:43:12.984190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:03.352 [2024-05-15 19:43:12.984201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:80440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.352 [2024-05-15 19:43:12.984206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:03.352 [2024-05-15 19:43:12.984355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 
lba:80448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.352 [2024-05-15 19:43:12.984363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:03.352 [2024-05-15 19:43:12.984374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:80456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.352 [2024-05-15 19:43:12.984380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:03.352 [2024-05-15 19:43:12.984390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.352 [2024-05-15 19:43:12.984395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:03.352 [2024-05-15 19:43:12.984407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:80472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.352 [2024-05-15 19:43:12.984412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:03.352 [2024-05-15 19:43:12.984422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:80480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.352 [2024-05-15 19:43:12.984427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:03.352 [2024-05-15 19:43:12.984438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.352 [2024-05-15 19:43:12.984444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:03.352 [2024-05-15 19:43:12.984454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.352 [2024-05-15 19:43:12.984459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:03.352 [2024-05-15 19:43:12.984470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.352 [2024-05-15 19:43:12.984475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:03.352 [2024-05-15 19:43:12.984487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.352 [2024-05-15 19:43:12.984492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:03.352 [2024-05-15 19:43:12.984503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.352 [2024-05-15 19:43:12.984508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:03.352 [2024-05-15 19:43:12.984518] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.352 [2024-05-15 19:43:12.984523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:03.352 [2024-05-15 19:43:12.984534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:80536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.352 [2024-05-15 19:43:12.984539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:03.352 [2024-05-15 19:43:12.984549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.353 [2024-05-15 19:43:12.984554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.353 [2024-05-15 19:43:12.984565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.353 [2024-05-15 19:43:12.984570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:03.353 [2024-05-15 19:43:12.984581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:80560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.353 [2024-05-15 19:43:12.984586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:03.353 [2024-05-15 19:43:12.984597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.353 [2024-05-15 19:43:12.984602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:03.353 [2024-05-15 19:43:12.984613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.353 [2024-05-15 19:43:12.984618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:03.353 [2024-05-15 19:43:12.984629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.353 [2024-05-15 19:43:12.984634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:03.353 [2024-05-15 19:43:12.984644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:80592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.353 [2024-05-15 19:43:12.984650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:03.353 [2024-05-15 19:43:12.984660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:80600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.353 [2024-05-15 19:43:12.984666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 
00:28:03.353 [2024-05-15 19:43:12.984676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.353 [2024-05-15 19:43:12.984683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:03.353 [2024-05-15 19:43:12.984693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.353 [2024-05-15 19:43:12.984698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:03.353 [2024-05-15 19:43:12.984708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:80624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.353 [2024-05-15 19:43:12.984714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:03.353 [2024-05-15 19:43:12.984724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.353 [2024-05-15 19:43:12.984729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:03.353 [2024-05-15 19:43:12.984739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:80640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.353 [2024-05-15 19:43:12.984745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:03.353 [2024-05-15 19:43:12.984755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:80648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.353 [2024-05-15 19:43:12.984760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:03.353 [2024-05-15 19:43:12.984769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:80656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.353 [2024-05-15 19:43:12.984775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:03.353 [2024-05-15 19:43:12.984786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.353 [2024-05-15 19:43:12.984791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:03.353 [2024-05-15 19:43:12.984801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.353 [2024-05-15 19:43:12.984807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:03.353 [2024-05-15 19:43:12.984817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:79720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.353 [2024-05-15 19:43:12.984822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:50 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:03.353 [2024-05-15 19:43:12.984832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:79728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.353 [2024-05-15 19:43:12.984838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:03.353 [2024-05-15 19:43:12.984848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:79736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.353 [2024-05-15 19:43:12.984853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:03.353 [2024-05-15 19:43:12.984865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.353 [2024-05-15 19:43:12.984873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:03.353 [2024-05-15 19:43:12.984884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:79752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.353 [2024-05-15 19:43:12.984889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:03.353 [2024-05-15 19:43:12.984900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:79760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.353 [2024-05-15 19:43:12.984906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:03.353 [2024-05-15 19:43:12.984917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:79768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.353 [2024-05-15 19:43:12.984921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:03.353 [2024-05-15 19:43:12.985954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:80680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.353 [2024-05-15 19:43:12.985963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:03.353 [2024-05-15 19:43:12.985974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:80688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.353 [2024-05-15 19:43:12.985980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:03.353 [2024-05-15 19:43:12.985991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.353 [2024-05-15 19:43:12.985996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.353 [2024-05-15 19:43:12.986006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:80704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.353 [2024-05-15 19:43:12.986011] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:03.353 [2024-05-15 19:43:12.986022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.353 [2024-05-15 19:43:12.986027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:03.353 [2024-05-15 19:43:12.986037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:80720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.353 [2024-05-15 19:43:12.986043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:03.353 [2024-05-15 19:43:12.986053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:80728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.353 [2024-05-15 19:43:12.986059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:03.353 [2024-05-15 19:43:12.986069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.353 [2024-05-15 19:43:12.986074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.353 [2024-05-15 19:43:12.986084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.353 [2024-05-15 19:43:12.986090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.354 [2024-05-15 19:43:12.986103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.354 [2024-05-15 19:43:12.986109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:03.354 [2024-05-15 19:43:12.986119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.354 [2024-05-15 19:43:12.986125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:03.354 [2024-05-15 19:43:12.986135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.354 [2024-05-15 19:43:12.986141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:03.354 [2024-05-15 19:43:12.986151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.354 [2024-05-15 19:43:12.986157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:03.354 [2024-05-15 19:43:12.986168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:03.354 [2024-05-15 19:43:12.986173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:03.354 [2024-05-15 19:43:12.986185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.354 [2024-05-15 19:43:12.986191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:03.354 [2024-05-15 19:43:12.986201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.354 [2024-05-15 19:43:12.986207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:03.354 [2024-05-15 19:43:12.986217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.354 [2024-05-15 19:43:12.986223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:03.354 [2024-05-15 19:43:12.986368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.354 [2024-05-15 19:43:12.986376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:03.354 [2024-05-15 19:43:12.986387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.354 [2024-05-15 19:43:12.986393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:03.354 [2024-05-15 19:43:12.986404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:79864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.354 [2024-05-15 19:43:12.986409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:03.354 [2024-05-15 19:43:12.986419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.354 [2024-05-15 19:43:12.986424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:03.354 [2024-05-15 19:43:12.986437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.354 [2024-05-15 19:43:12.986442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:03.354 [2024-05-15 19:43:12.986452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.354 [2024-05-15 19:43:12.986458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:03.354 [2024-05-15 19:43:12.986468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 
lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.354 [2024-05-15 19:43:12.986474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:03.354 [2024-05-15 19:43:12.986485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.354 [2024-05-15 19:43:12.986490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:03.354 [2024-05-15 19:43:12.986501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.354 [2024-05-15 19:43:12.986506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:03.354 [2024-05-15 19:43:12.986517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.354 [2024-05-15 19:43:12.986522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:03.354 [2024-05-15 19:43:12.986532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:79928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.354 [2024-05-15 19:43:12.986538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:03.354 [2024-05-15 19:43:12.986548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.354 [2024-05-15 19:43:12.986553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:03.354 [2024-05-15 19:43:12.986564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:79944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.354 [2024-05-15 19:43:12.986569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:03.354 [2024-05-15 19:43:12.986579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.354 [2024-05-15 19:43:12.986585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:03.354 [2024-05-15 19:43:12.986596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:79960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.354 [2024-05-15 19:43:12.986602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:03.354 [2024-05-15 19:43:12.986612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.354 [2024-05-15 19:43:12.986617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:03.354 [2024-05-15 19:43:12.986627] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:79976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.354 [2024-05-15 19:43:12.986634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:03.354 [2024-05-15 19:43:12.986644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.354 [2024-05-15 19:43:12.986650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:03.354 [2024-05-15 19:43:12.986660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.354 [2024-05-15 19:43:12.986665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:03.354 [2024-05-15 19:43:12.986676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.354 [2024-05-15 19:43:12.986681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:03.354 [2024-05-15 19:43:12.986691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:80008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.354 [2024-05-15 19:43:12.986696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:03.354 [2024-05-15 19:43:12.986707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.354 [2024-05-15 19:43:12.986712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:03.354 [2024-05-15 19:43:12.986722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.354 [2024-05-15 19:43:12.986727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:03.354 [2024-05-15 19:43:12.986738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.354 [2024-05-15 19:43:12.986743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.354 [2024-05-15 19:43:12.986753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.354 [2024-05-15 19:43:12.986759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:03.354 [2024-05-15 19:43:12.986769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.354 [2024-05-15 19:43:12.986775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 
00:28:03.354 [2024-05-15 19:43:12.986785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.354 [2024-05-15 19:43:12.986791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:03.354 [2024-05-15 19:43:12.986801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.354 [2024-05-15 19:43:12.986806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:03.354 [2024-05-15 19:43:12.986817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.354 [2024-05-15 19:43:12.986823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:03.354 [2024-05-15 19:43:12.986834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.354 [2024-05-15 19:43:12.986840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:03.354 [2024-05-15 19:43:12.986850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.354 [2024-05-15 19:43:12.986856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:03.354 [2024-05-15 19:43:12.986866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.354 [2024-05-15 19:43:12.986872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:03.354 [2024-05-15 19:43:12.986882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.355 [2024-05-15 19:43:12.986888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:03.355 [2024-05-15 19:43:12.986898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.355 [2024-05-15 19:43:12.986903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:03.355 [2024-05-15 19:43:12.986914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.355 [2024-05-15 19:43:12.986920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:03.355 [2024-05-15 19:43:12.986930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.355 [2024-05-15 19:43:12.986936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:54 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:03.355 [2024-05-15 19:43:12.986946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.355 [2024-05-15 19:43:12.986952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:03.355 [2024-05-15 19:43:12.986962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.355 [2024-05-15 19:43:12.986967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:03.355 [2024-05-15 19:43:12.986978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.355 [2024-05-15 19:43:12.986984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:03.355 [2024-05-15 19:43:12.986994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.355 [2024-05-15 19:43:12.986999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:03.355 [2024-05-15 19:43:12.987009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.355 [2024-05-15 19:43:12.987015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:03.355 [2024-05-15 19:43:12.987026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:80176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.355 [2024-05-15 19:43:12.987031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:03.355 [2024-05-15 19:43:12.987042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.355 [2024-05-15 19:43:12.987047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:03.355 [2024-05-15 19:43:12.987057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.355 [2024-05-15 19:43:12.987063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:03.355 [2024-05-15 19:43:12.987073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.355 [2024-05-15 19:43:12.987078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:03.355 [2024-05-15 19:43:12.987089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.355 [2024-05-15 19:43:12.987094] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:03.355 [2024-05-15 19:43:12.991332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.355 [2024-05-15 19:43:12.991353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:03.355 [2024-05-15 19:43:12.991749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.355 [2024-05-15 19:43:12.991760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:03.355 [2024-05-15 19:43:12.991773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.355 [2024-05-15 19:43:12.991779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:03.355 [2024-05-15 19:43:12.991790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:80240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.355 [2024-05-15 19:43:12.991795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:03.355 [2024-05-15 19:43:12.991806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.355 [2024-05-15 19:43:12.991811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:03.355 [2024-05-15 19:43:12.991822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:80256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.355 [2024-05-15 19:43:12.991827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:03.355 [2024-05-15 19:43:12.991837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.355 [2024-05-15 19:43:12.991844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:03.355 [2024-05-15 19:43:12.991857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.355 [2024-05-15 19:43:12.991863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:03.355 [2024-05-15 19:43:12.991874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.355 [2024-05-15 19:43:12.991880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:03.355 [2024-05-15 19:43:12.991890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:80288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
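The command/completion notices repeat with only cid, lba, and sqhd changing, so they are easier to review in aggregate than line by line. A small, hypothetical triage script along the following lines (the regular expressions are taken from the exact print formats above; the file name and function are assumptions) counts completions per status and reports the LBA span that was touched. It is a sketch for reading this console output, not an SPDK utility.

# triage_notices.py - hypothetical summary of the qpair notices in a saved console log
import re
from collections import Counter

CMD = re.compile(r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
                 r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)")
CPL = re.compile(r"spdk_nvme_print_completion: \*NOTICE\*: (.+?) "
                 r"\((\w{2})/(\w{2})\) qid:(\d+) cid:(\d+)")

def summarize(path: str) -> None:
    # Collapse whitespace so entries wrapped across line breaks still match.
    text = " ".join(open(path).read().split())
    status = Counter((m.group(1), m.group(2), m.group(3)) for m in CPL.finditer(text))
    lbas = [int(m.group(5)) for m in CMD.finditer(text)]
    for (name, sct, sc), count in status.most_common():
        print(f"{count:6d}  ({sct}/{sc}) {name}")
    if lbas:
        print(f"LBA span touched: {min(lbas)}..{max(lbas)}")

# summarize("console.log")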
00:28:03.355 [2024-05-15 19:43:12.991895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.355 [2024-05-15 19:43:12.991906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.355 [2024-05-15 19:43:12.991912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:03.355 [2024-05-15 19:43:12.991922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:80304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.355 [2024-05-15 19:43:12.991928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:03.355 [2024-05-15 19:43:12.991938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.355 [2024-05-15 19:43:12.991944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:03.355 [2024-05-15 19:43:12.991955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:80320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.355 [2024-05-15 19:43:12.991960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:03.355 [2024-05-15 19:43:12.991970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:80328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.355 [2024-05-15 19:43:12.991976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:03.355 [2024-05-15 19:43:12.991986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.355 [2024-05-15 19:43:12.991992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:03.355 [2024-05-15 19:43:12.992002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.355 [2024-05-15 19:43:12.992007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:03.355 [2024-05-15 19:43:12.992018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.355 [2024-05-15 19:43:12.992023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:03.355 [2024-05-15 19:43:12.992034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:80360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.355 [2024-05-15 19:43:12.992039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:03.355 [2024-05-15 19:43:12.992050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 
lba:80368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.355 [2024-05-15 19:43:12.992057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:03.355 [2024-05-15 19:43:12.992067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:80376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.355 [2024-05-15 19:43:12.992073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:03.355 [2024-05-15 19:43:12.992083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:80384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.355 [2024-05-15 19:43:12.992088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:03.355 [2024-05-15 19:43:12.992098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.355 [2024-05-15 19:43:12.992104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:03.355 [2024-05-15 19:43:12.992114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.355 [2024-05-15 19:43:12.992120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:03.355 [2024-05-15 19:43:12.992130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:80408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.355 [2024-05-15 19:43:12.992135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:03.355 [2024-05-15 19:43:12.992145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.355 [2024-05-15 19:43:12.992151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:03.355 [2024-05-15 19:43:12.992162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:80424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.356 [2024-05-15 19:43:12.992167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:03.356 [2024-05-15 19:43:12.992179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.356 [2024-05-15 19:43:12.992185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:03.356 [2024-05-15 19:43:12.992194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:80440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.356 [2024-05-15 19:43:12.992200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:03.356 [2024-05-15 19:43:12.992210] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:80448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.356 [2024-05-15 19:43:12.992215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:03.356 [2024-05-15 19:43:12.992226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:80456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.356 [2024-05-15 19:43:12.992231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:03.356 [2024-05-15 19:43:12.992242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.356 [2024-05-15 19:43:12.992248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:03.356 [2024-05-15 19:43:12.992259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:80472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.356 [2024-05-15 19:43:12.992265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:03.356 [2024-05-15 19:43:12.992275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.356 [2024-05-15 19:43:12.992281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:03.356 [2024-05-15 19:43:12.992291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.356 [2024-05-15 19:43:12.992297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:03.356 [2024-05-15 19:43:12.992307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.356 [2024-05-15 19:43:12.992317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:03.356 [2024-05-15 19:43:12.992328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.356 [2024-05-15 19:43:12.992333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:03.356 [2024-05-15 19:43:12.992343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.356 [2024-05-15 19:43:12.992349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:03.356 [2024-05-15 19:43:12.992359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.356 [2024-05-15 19:43:12.992364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005e p:0 m:0 dnr:0 
00:28:03.356 [2024-05-15 19:43:12.992375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.356 [2024-05-15 19:43:12.992380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:03.356 [2024-05-15 19:43:12.992391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.356 [2024-05-15 19:43:12.992396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:03.356 [2024-05-15 19:43:12.992407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.356 [2024-05-15 19:43:12.992412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.356 [2024-05-15 19:43:12.992422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.356 [2024-05-15 19:43:12.992428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:03.356 [2024-05-15 19:43:12.992438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:80560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.356 [2024-05-15 19:43:12.992443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:03.356 [2024-05-15 19:43:12.992455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:80568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.356 [2024-05-15 19:43:12.992461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:03.356 [2024-05-15 19:43:12.992471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.356 [2024-05-15 19:43:12.992478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:03.356 [2024-05-15 19:43:12.992488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.356 [2024-05-15 19:43:12.992494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:03.356 [2024-05-15 19:43:12.992504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:80592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.356 [2024-05-15 19:43:12.992510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:03.356 [2024-05-15 19:43:12.992520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.356 [2024-05-15 19:43:12.992526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:111 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:03.356 [2024-05-15 19:43:12.992536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.356 [2024-05-15 19:43:12.992542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:03.356 [2024-05-15 19:43:12.992552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.356 [2024-05-15 19:43:12.992557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:03.356 [2024-05-15 19:43:12.992568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:80624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.356 [2024-05-15 19:43:12.992573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:03.356 [2024-05-15 19:43:12.992584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:80632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.356 [2024-05-15 19:43:12.992589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:03.356 [2024-05-15 19:43:12.992600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:80640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.356 [2024-05-15 19:43:12.992605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:03.356 [2024-05-15 19:43:12.992615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:80648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.356 [2024-05-15 19:43:12.992621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:03.356 [2024-05-15 19:43:12.992631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:80656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.356 [2024-05-15 19:43:12.992636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:03.356 [2024-05-15 19:43:12.992648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:80664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.356 [2024-05-15 19:43:12.992653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:03.356 [2024-05-15 19:43:12.992664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:80672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.356 [2024-05-15 19:43:12.992669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:03.356 [2024-05-15 19:43:12.992680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:79720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.356 [2024-05-15 19:43:12.992686] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:03.356 [2024-05-15 19:43:12.992696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:79728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.356 [2024-05-15 19:43:12.992702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:03.356 [2024-05-15 19:43:12.992713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:79736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.356 [2024-05-15 19:43:12.992718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:03.356 [2024-05-15 19:43:12.992729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:79744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.356 [2024-05-15 19:43:12.992735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:03.356 [2024-05-15 19:43:12.992746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:79752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.356 [2024-05-15 19:43:12.992751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:03.356 [2024-05-15 19:43:12.992761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:79760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.356 [2024-05-15 19:43:12.992767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:03.356 [2024-05-15 19:43:12.992778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:79768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.356 [2024-05-15 19:43:12.992783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:03.356 [2024-05-15 19:43:12.992793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:80680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.356 [2024-05-15 19:43:12.992799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:03.356 [2024-05-15 19:43:12.992810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:80688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.356 [2024-05-15 19:43:12.992815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:03.357 [2024-05-15 19:43:12.992825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.357 [2024-05-15 19:43:12.992831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.357 [2024-05-15 19:43:12.992842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80704 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:03.357 [2024-05-15 19:43:12.992848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:03.357 [2024-05-15 19:43:12.992859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.357 [2024-05-15 19:43:12.992864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:03.357 [2024-05-15 19:43:12.992875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:80720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.357 [2024-05-15 19:43:12.992880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:03.357 [2024-05-15 19:43:12.992890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.357 [2024-05-15 19:43:12.992896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:03.357 [2024-05-15 19:43:12.992906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:80736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.357 [2024-05-15 19:43:12.992911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.357 [2024-05-15 19:43:12.992922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.357 [2024-05-15 19:43:12.992927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.357 [2024-05-15 19:43:12.992937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.357 [2024-05-15 19:43:12.992943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:03.357 [2024-05-15 19:43:12.992953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.357 [2024-05-15 19:43:12.992958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:03.357 [2024-05-15 19:43:12.992968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.357 [2024-05-15 19:43:12.992974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:03.357 [2024-05-15 19:43:12.992984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.357 [2024-05-15 19:43:12.992990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:03.357 [2024-05-15 19:43:12.993001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 
nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.357 [2024-05-15 19:43:12.993006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:03.357 [2024-05-15 19:43:12.993017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.357 [2024-05-15 19:43:12.993022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:03.357 [2024-05-15 19:43:12.993033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.357 [2024-05-15 19:43:12.993040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:03.357 [2024-05-15 19:43:12.993661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.357 [2024-05-15 19:43:12.993673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:03.357 [2024-05-15 19:43:12.993685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.357 [2024-05-15 19:43:12.993691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:03.357 [2024-05-15 19:43:12.993701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.357 [2024-05-15 19:43:12.993708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:03.357 [2024-05-15 19:43:12.993718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.357 [2024-05-15 19:43:12.993724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:03.357 [2024-05-15 19:43:12.993734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.357 [2024-05-15 19:43:12.993740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:03.357 [2024-05-15 19:43:12.993750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.357 [2024-05-15 19:43:12.993756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:03.357 [2024-05-15 19:43:12.993766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.357 [2024-05-15 19:43:12.993772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:03.357 [2024-05-15 19:43:12.993782] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.357 [2024-05-15 19:43:12.993788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:03.357 [2024-05-15 19:43:12.993799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.357 [2024-05-15 19:43:12.993804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:03.357 [2024-05-15 19:43:12.993814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.357 [2024-05-15 19:43:12.993820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:03.357 [2024-05-15 19:43:12.993830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.357 [2024-05-15 19:43:12.993836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:03.357 [2024-05-15 19:43:12.993846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:79928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.357 [2024-05-15 19:43:12.993852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:03.357 [2024-05-15 19:43:12.993865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.357 [2024-05-15 19:43:12.993871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:03.357 [2024-05-15 19:43:12.993881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.357 [2024-05-15 19:43:12.993886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:03.357 [2024-05-15 19:43:12.993897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.357 [2024-05-15 19:43:12.993902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:03.357 [2024-05-15 19:43:12.993912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:79960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.357 [2024-05-15 19:43:12.993918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:03.357 [2024-05-15 19:43:12.993928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.357 [2024-05-15 19:43:12.993933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 
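The trailing fields on each completion line come straight from the NVMe completion queue entry: cdw0 is command-specific dword 0, sqhd is the submission queue head pointer, p is the phase tag, m is the More bit, and dnr is Do Not Retry. Every failure here carries dnr:0, meaning the commands may be retried once an accessible path is available again. A minimal check along those lines is sketched below; the helper name and the sample line are illustrative only.

# retryable.py - illustrative Do Not Retry check on one completion notice
import re

FIELDS = re.compile(r"cdw0:(\w+) sqhd:(\w+) p:(\d) m:(\d) dnr:(\d)")

def retryable(completion_line: str) -> bool:
    """A command may be retried when the Do Not Retry bit is clear (dnr:0)."""
    m = FIELDS.search(completion_line)
    return m is not None and m.group(5) == "0"

sample = ("spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE "
          "(03/02) qid:1 cid:94 cdw0:0 sqhd:0019 p:0 m:0 dnr:0")
print(retryable(sample))    # -> True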
00:28:03.357 [2024-05-15 19:43:12.993944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.357 [2024-05-15 19:43:12.993949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:03.357 [2024-05-15 19:43:12.993960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.357 [2024-05-15 19:43:12.993965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:03.357 [2024-05-15 19:43:12.993975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.357 [2024-05-15 19:43:12.993981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:03.358 [2024-05-15 19:43:12.993991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.358 [2024-05-15 19:43:12.993996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:03.358 [2024-05-15 19:43:12.994007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:80008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.358 [2024-05-15 19:43:12.994012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:03.358 [2024-05-15 19:43:12.994022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.358 [2024-05-15 19:43:12.994028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:03.358 [2024-05-15 19:43:12.994038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.358 [2024-05-15 19:43:12.994044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:03.358 [2024-05-15 19:43:12.994056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.358 [2024-05-15 19:43:12.994062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.358 [2024-05-15 19:43:12.994073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.358 [2024-05-15 19:43:12.994078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:03.358 [2024-05-15 19:43:12.994089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.358 [2024-05-15 19:43:12.994094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:70 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:03.358 [2024-05-15 19:43:12.994105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.358 [2024-05-15 19:43:12.994110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:03.358 [2024-05-15 19:43:12.994120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.358 [2024-05-15 19:43:12.994126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:03.358 [2024-05-15 19:43:12.994137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.358 [2024-05-15 19:43:12.994142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:03.358 [2024-05-15 19:43:12.994153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.358 [2024-05-15 19:43:12.994158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:03.358 [2024-05-15 19:43:12.994168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.358 [2024-05-15 19:43:12.994174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:03.358 [2024-05-15 19:43:12.994184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.358 [2024-05-15 19:43:12.994190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:03.358 [2024-05-15 19:43:12.994200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.358 [2024-05-15 19:43:12.994205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:03.358 [2024-05-15 19:43:12.994216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.358 [2024-05-15 19:43:12.994221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:03.358 [2024-05-15 19:43:12.994232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.358 [2024-05-15 19:43:12.994238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:03.358 [2024-05-15 19:43:12.994248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.358 [2024-05-15 19:43:12.994256] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:03.358 [2024-05-15 19:43:12.994267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.358 [2024-05-15 19:43:12.994272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:03.358 [2024-05-15 19:43:12.994282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.358 [2024-05-15 19:43:12.994287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:03.358 [2024-05-15 19:43:12.994298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.358 [2024-05-15 19:43:12.994303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:03.358 [2024-05-15 19:43:12.994317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.358 [2024-05-15 19:43:12.994323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:03.358 [2024-05-15 19:43:12.994333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.358 [2024-05-15 19:43:12.994338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:03.358 [2024-05-15 19:43:12.994349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.358 [2024-05-15 19:43:12.994354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:03.358 [2024-05-15 19:43:12.994364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.358 [2024-05-15 19:43:12.994370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:03.358 [2024-05-15 19:43:12.994379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.358 [2024-05-15 19:43:12.994385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:03.358 [2024-05-15 19:43:12.994395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.358 [2024-05-15 19:43:12.994401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:03.358 [2024-05-15 19:43:12.994411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:03.358 [2024-05-15 19:43:12.994416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:03.358 [2024-05-15 19:43:12.995620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.358 [2024-05-15 19:43:12.995630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:03.358 [2024-05-15 19:43:12.995642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.358 [2024-05-15 19:43:12.995649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:03.358 [2024-05-15 19:43:12.995660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.358 [2024-05-15 19:43:12.995665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:03.358 [2024-05-15 19:43:12.995676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:80240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.358 [2024-05-15 19:43:12.995682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:03.358 [2024-05-15 19:43:12.995692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.358 [2024-05-15 19:43:12.995698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:03.358 [2024-05-15 19:43:12.995709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:80256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.358 [2024-05-15 19:43:12.995714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:03.358 [2024-05-15 19:43:12.995725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.358 [2024-05-15 19:43:12.995731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:03.358 [2024-05-15 19:43:12.995741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.358 [2024-05-15 19:43:12.995747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:03.358 [2024-05-15 19:43:12.995757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:80280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.358 [2024-05-15 19:43:12.995763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:03.358 [2024-05-15 19:43:12.995773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 
lba:80288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.358 [2024-05-15 19:43:12.995779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.358 [2024-05-15 19:43:12.995789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.358 [2024-05-15 19:43:12.995794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:03.358 [2024-05-15 19:43:12.995805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.358 [2024-05-15 19:43:12.995810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:03.358 [2024-05-15 19:43:12.995821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.358 [2024-05-15 19:43:12.995826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:03.358 [2024-05-15 19:43:12.995836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.359 [2024-05-15 19:43:12.995843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:03.359 [2024-05-15 19:43:12.995855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:80328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.359 [2024-05-15 19:43:12.995861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:03.359 [2024-05-15 19:43:12.995872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:80336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.359 [2024-05-15 19:43:12.995877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:03.359 [2024-05-15 19:43:12.995888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:80344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.359 [2024-05-15 19:43:12.995893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:03.359 [2024-05-15 19:43:12.996035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:80352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.359 [2024-05-15 19:43:12.996043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:03.359 [2024-05-15 19:43:12.996054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:80360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.359 [2024-05-15 19:43:12.996060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:03.359 [2024-05-15 19:43:12.996071] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.359 [2024-05-15 19:43:12.996076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:03.359 [2024-05-15 19:43:12.996087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:80376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.359 [2024-05-15 19:43:12.996092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:03.359 [2024-05-15 19:43:12.996102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:80384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.359 [2024-05-15 19:43:12.996108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:03.359 [2024-05-15 19:43:12.996119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.359 [2024-05-15 19:43:12.996125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:03.359 [2024-05-15 19:43:12.996135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:80400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.359 [2024-05-15 19:43:12.996142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:03.359 [2024-05-15 19:43:12.996152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:80408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.359 [2024-05-15 19:43:12.996157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:03.359 [2024-05-15 19:43:12.996167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:80416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.359 [2024-05-15 19:43:12.996173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:03.359 [2024-05-15 19:43:12.996185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:80424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.359 [2024-05-15 19:43:12.996191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:03.359 [2024-05-15 19:43:12.996201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:80432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.359 [2024-05-15 19:43:12.996207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:03.359 [2024-05-15 19:43:12.996218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:80440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.359 [2024-05-15 19:43:12.996223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 
00:28:03.359 [2024-05-15 19:43:12.996233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:80448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.359 [2024-05-15 19:43:12.996239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:03.359 [2024-05-15 19:43:12.996249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.359 [2024-05-15 19:43:12.996255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:03.359 [2024-05-15 19:43:12.996265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:80464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.359 [2024-05-15 19:43:12.996271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:03.359 [2024-05-15 19:43:12.996282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.359 [2024-05-15 19:43:12.996287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:03.359 [2024-05-15 19:43:12.996297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:80480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.359 [2024-05-15 19:43:12.996303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:03.359 [2024-05-15 19:43:12.996318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.359 [2024-05-15 19:43:12.996324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:03.359 [2024-05-15 19:43:12.996334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.359 [2024-05-15 19:43:12.996339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:03.359 [2024-05-15 19:43:12.996350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.359 [2024-05-15 19:43:12.996356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:03.359 [2024-05-15 19:43:12.996366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.359 [2024-05-15 19:43:12.996371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:03.359 [2024-05-15 19:43:12.996381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.359 [2024-05-15 19:43:12.996389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:90 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:03.359 [2024-05-15 19:43:12.996399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.359 [2024-05-15 19:43:12.996405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:03.359 [2024-05-15 19:43:12.996415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.359 [2024-05-15 19:43:12.996420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:03.359 [2024-05-15 19:43:12.996431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.359 [2024-05-15 19:43:12.996436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.359 [2024-05-15 19:43:12.996447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:80552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.359 [2024-05-15 19:43:12.996452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:03.359 [2024-05-15 19:43:12.996463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:80560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.359 [2024-05-15 19:43:12.996468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:03.359 [2024-05-15 19:43:12.996478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.359 [2024-05-15 19:43:12.996484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:03.359 [2024-05-15 19:43:12.996495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.359 [2024-05-15 19:43:12.996500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:03.359 [2024-05-15 19:43:12.996511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.359 [2024-05-15 19:43:12.996516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:03.359 [2024-05-15 19:43:12.996527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.359 [2024-05-15 19:43:12.996533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:03.359 [2024-05-15 19:43:12.996544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:80600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.359 [2024-05-15 19:43:12.996549] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:03.359 [2024-05-15 19:43:12.996560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.359 [2024-05-15 19:43:12.996565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:03.359 [2024-05-15 19:43:12.996576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:80616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.359 [2024-05-15 19:43:12.996582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:03.359 [2024-05-15 19:43:12.996593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:80624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.359 [2024-05-15 19:43:12.996598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:03.359 [2024-05-15 19:43:12.996609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:80632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.360 [2024-05-15 19:43:12.996614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:03.360 [2024-05-15 19:43:12.996625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.360 [2024-05-15 19:43:12.996630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:03.360 [2024-05-15 19:43:12.996640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:80648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.360 [2024-05-15 19:43:12.996645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:03.360 [2024-05-15 19:43:12.996656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.360 [2024-05-15 19:43:12.996662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:03.360 [2024-05-15 19:43:12.996673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:80664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.360 [2024-05-15 19:43:12.996678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:03.360 [2024-05-15 19:43:12.996689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.360 [2024-05-15 19:43:12.996694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:03.360 [2024-05-15 19:43:12.996704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:79720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:03.360 [2024-05-15 19:43:12.996710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:03.360 [2024-05-15 19:43:12.996720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:79728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.360 [2024-05-15 19:43:12.996725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:03.360 [2024-05-15 19:43:12.996736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:79736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.360 [2024-05-15 19:43:12.996741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:03.360 [2024-05-15 19:43:12.996752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:79744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.360 [2024-05-15 19:43:12.996757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:03.360 [2024-05-15 19:43:12.996767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:79752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.360 [2024-05-15 19:43:12.996773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:03.360 [2024-05-15 19:43:12.996785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:79760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.360 [2024-05-15 19:43:12.996791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:03.360 [2024-05-15 19:43:12.996801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:79768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.360 [2024-05-15 19:43:12.996806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:03.360 [2024-05-15 19:43:12.996817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.360 [2024-05-15 19:43:12.996823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:03.360 [2024-05-15 19:43:12.996833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:80688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.360 [2024-05-15 19:43:12.996838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:03.360 [2024-05-15 19:43:12.996849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:80696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.360 [2024-05-15 19:43:12.996854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.360 [2024-05-15 19:43:12.996864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 
nsid:1 lba:80704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.360 [2024-05-15 19:43:12.996870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:03.360 [2024-05-15 19:43:12.996881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:80712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.360 [2024-05-15 19:43:12.996887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:03.360 [2024-05-15 19:43:12.997283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.360 [2024-05-15 19:43:12.997292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:03.360 [2024-05-15 19:43:12.997303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:80728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.360 [2024-05-15 19:43:12.997309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:03.360 [2024-05-15 19:43:12.997324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.360 [2024-05-15 19:43:12.997331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.360 [2024-05-15 19:43:12.997341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.360 [2024-05-15 19:43:12.997347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.360 [2024-05-15 19:43:12.997358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.360 [2024-05-15 19:43:12.997363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:03.360 [2024-05-15 19:43:12.997376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.360 [2024-05-15 19:43:12.997381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:03.360 [2024-05-15 19:43:12.997392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.360 [2024-05-15 19:43:12.997398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:03.360 [2024-05-15 19:43:12.997408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.360 [2024-05-15 19:43:12.997414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:03.360 [2024-05-15 19:43:12.997424] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.360 [2024-05-15 19:43:12.997430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:03.360 [2024-05-15 19:43:12.997440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.360 [2024-05-15 19:43:12.997446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:03.360 [2024-05-15 19:43:12.997456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.360 [2024-05-15 19:43:12.997461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:03.360 [2024-05-15 19:43:12.997472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.360 [2024-05-15 19:43:12.997477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:03.360 [2024-05-15 19:43:12.997487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.360 [2024-05-15 19:43:12.997492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:03.360 [2024-05-15 19:43:12.997503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.360 [2024-05-15 19:43:12.997508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:03.360 [2024-05-15 19:43:12.997518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:79864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.360 [2024-05-15 19:43:12.997524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:03.360 [2024-05-15 19:43:12.997534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.360 [2024-05-15 19:43:12.997540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:03.360 [2024-05-15 19:43:12.997550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.360 [2024-05-15 19:43:12.997556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:03.360 [2024-05-15 19:43:12.997566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.360 [2024-05-15 19:43:12.997574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:28:03.360 [2024-05-15 19:43:12.997585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.360 [2024-05-15 19:43:12.997591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:03.360 [2024-05-15 19:43:12.997601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.360 [2024-05-15 19:43:12.997608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:03.360 [2024-05-15 19:43:12.997618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.360 [2024-05-15 19:43:12.997624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:03.360 [2024-05-15 19:43:12.997635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.360 [2024-05-15 19:43:12.997640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:03.360 [2024-05-15 19:43:12.997650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:79928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.360 [2024-05-15 19:43:12.997655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:03.361 [2024-05-15 19:43:12.997665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.361 [2024-05-15 19:43:12.997671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:03.361 [2024-05-15 19:43:12.997681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:79944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.361 [2024-05-15 19:43:12.997687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:03.361 [2024-05-15 19:43:12.997698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.361 [2024-05-15 19:43:12.997703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:03.361 [2024-05-15 19:43:12.997713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:79960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.361 [2024-05-15 19:43:12.997719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:03.361 [2024-05-15 19:43:12.997729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.361 [2024-05-15 19:43:12.997734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:29 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:03.361 [2024-05-15 19:43:12.997745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.361 [2024-05-15 19:43:12.997750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:03.361 [2024-05-15 19:43:12.997761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.361 [2024-05-15 19:43:12.997767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:03.361 [2024-05-15 19:43:12.997778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:79992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.361 [2024-05-15 19:43:12.997783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:03.361 [2024-05-15 19:43:12.997795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.361 [2024-05-15 19:43:12.997800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:03.361 [2024-05-15 19:43:12.998051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:80008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.361 [2024-05-15 19:43:12.998058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:03.361 [2024-05-15 19:43:12.998069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.361 [2024-05-15 19:43:12.998075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:03.361 [2024-05-15 19:43:12.998086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.361 [2024-05-15 19:43:12.998091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:03.361 [2024-05-15 19:43:12.998102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.361 [2024-05-15 19:43:12.998107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.361 [2024-05-15 19:43:12.998117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.361 [2024-05-15 19:43:12.998123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:03.361 [2024-05-15 19:43:12.998133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.361 [2024-05-15 19:43:12.998139] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:03.361 [2024-05-15 19:43:12.998149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.361 [2024-05-15 19:43:12.998155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:03.361 [2024-05-15 19:43:12.998165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:80064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.361 [2024-05-15 19:43:12.998171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:03.361 [2024-05-15 19:43:12.998181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.361 [2024-05-15 19:43:12.998187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:03.361 [2024-05-15 19:43:12.998198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.361 [2024-05-15 19:43:12.998203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:03.361 [2024-05-15 19:43:12.998215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.361 [2024-05-15 19:43:12.998221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:03.361 [2024-05-15 19:43:12.998231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.361 [2024-05-15 19:43:12.998236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:03.361 [2024-05-15 19:43:12.998247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.361 [2024-05-15 19:43:12.998252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:03.361 [2024-05-15 19:43:12.998263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.361 [2024-05-15 19:43:12.998268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:03.361 [2024-05-15 19:43:12.998278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.361 [2024-05-15 19:43:12.998284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:03.361 [2024-05-15 19:43:12.998295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:03.361 [2024-05-15 19:43:12.998300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:03.361 [2024-05-15 19:43:12.998438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.361 [2024-05-15 19:43:12.998446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:03.361 [2024-05-15 19:43:12.998457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.361 [2024-05-15 19:43:12.998463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:03.361 [2024-05-15 19:43:12.998473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.361 [2024-05-15 19:43:12.998478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:03.361 [2024-05-15 19:43:12.998488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.361 [2024-05-15 19:43:12.998494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:03.361 [2024-05-15 19:43:12.998504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.361 [2024-05-15 19:43:12.998510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:03.361 [2024-05-15 19:43:12.998520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.361 [2024-05-15 19:43:12.998525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:03.361 [2024-05-15 19:43:12.998537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.361 [2024-05-15 19:43:12.998543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:03.361 [2024-05-15 19:43:12.998553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.361 [2024-05-15 19:43:12.998559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:03.361 [2024-05-15 19:43:12.998635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.361 [2024-05-15 19:43:12.998643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:03.361 [2024-05-15 19:43:12.998654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 
lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.361 [2024-05-15 19:43:12.998659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:03.361 [2024-05-15 19:43:12.998669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.361 [2024-05-15 19:43:12.998675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:03.361 [2024-05-15 19:43:12.998686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.361 [2024-05-15 19:43:12.998691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:03.361 [2024-05-15 19:43:12.998702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.361 [2024-05-15 19:43:12.998707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:03.361 [2024-05-15 19:43:12.998718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.361 [2024-05-15 19:43:12.998724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:03.361 [2024-05-15 19:43:12.998734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.361 [2024-05-15 19:43:12.998739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:03.361 [2024-05-15 19:43:12.998750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:80256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.362 [2024-05-15 19:43:12.998755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:03.362 [2024-05-15 19:43:12.998942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.362 [2024-05-15 19:43:12.998950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:03.362 [2024-05-15 19:43:12.998960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.362 [2024-05-15 19:43:12.998966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:03.362 [2024-05-15 19:43:12.998976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.362 [2024-05-15 19:43:12.998984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:03.362 [2024-05-15 19:43:12.998994] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:80288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.362 [2024-05-15 19:43:12.999000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.362 [2024-05-15 19:43:12.999010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.362 [2024-05-15 19:43:12.999015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:03.362 [2024-05-15 19:43:12.999026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.362 [2024-05-15 19:43:12.999032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:03.362 [2024-05-15 19:43:12.999042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.362 [2024-05-15 19:43:12.999047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:03.362 [2024-05-15 19:43:12.999058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.362 [2024-05-15 19:43:12.999064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:03.362 [2024-05-15 19:43:12.999147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:80328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.362 [2024-05-15 19:43:12.999154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:03.362 [2024-05-15 19:43:12.999166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:80336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.362 [2024-05-15 19:43:12.999171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:03.362 [2024-05-15 19:43:12.999182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:80344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.362 [2024-05-15 19:43:12.999187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:03.362 [2024-05-15 19:43:12.999197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.362 [2024-05-15 19:43:12.999203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:03.362 [2024-05-15 19:43:12.999213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:80360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.362 [2024-05-15 19:43:12.999219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004a p:0 m:0 dnr:0 
00:28:03.362 [2024-05-15 19:43:12.999229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:80368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.362 [2024-05-15 19:43:12.999234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:03.362 [2024-05-15 19:43:12.999245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.362 [2024-05-15 19:43:12.999252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:03.362 [2024-05-15 19:43:12.999263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.362 [2024-05-15 19:43:12.999268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:03.362 [2024-05-15 19:43:12.999589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.362 [2024-05-15 19:43:12.999597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:03.362 [2024-05-15 19:43:12.999608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:80400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.362 [2024-05-15 19:43:12.999614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:03.362 [2024-05-15 19:43:12.999624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:80408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.362 [2024-05-15 19:43:12.999629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:03.362 [2024-05-15 19:43:12.999639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:80416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.362 [2024-05-15 19:43:12.999645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:03.362 [2024-05-15 19:43:12.999655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:80424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.362 [2024-05-15 19:43:12.999661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:03.362 [2024-05-15 19:43:12.999672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:80432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.362 [2024-05-15 19:43:12.999677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:03.362 [2024-05-15 19:43:12.999688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.362 [2024-05-15 19:43:12.999694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:113 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:03.362 [2024-05-15 19:43:12.999704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:80448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.362 [2024-05-15 19:43:12.999710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:03.362 [2024-05-15 19:43:12.999787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.362 [2024-05-15 19:43:12.999795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:03.362 [2024-05-15 19:43:12.999806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:80464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.362 [2024-05-15 19:43:12.999811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:03.362 [2024-05-15 19:43:12.999822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.362 [2024-05-15 19:43:12.999827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:03.362 [2024-05-15 19:43:12.999839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:80480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.362 [2024-05-15 19:43:12.999845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:03.362 [2024-05-15 19:43:12.999856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.362 [2024-05-15 19:43:12.999860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:03.362 [2024-05-15 19:43:12.999871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.362 [2024-05-15 19:43:12.999876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:03.362 [2024-05-15 19:43:12.999887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.362 [2024-05-15 19:43:12.999893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:03.362 [2024-05-15 19:43:12.999903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.362 [2024-05-15 19:43:12.999909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:03.362 [2024-05-15 19:43:13.000165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.363 [2024-05-15 19:43:13.000173] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:03.363 [2024-05-15 19:43:13.000184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.363 [2024-05-15 19:43:13.000189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:03.363 [2024-05-15 19:43:13.000200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.363 [2024-05-15 19:43:13.000205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:03.363 [2024-05-15 19:43:13.000216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.363 [2024-05-15 19:43:13.000221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.363 [2024-05-15 19:43:13.000231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:80552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.363 [2024-05-15 19:43:13.000237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:03.363 [2024-05-15 19:43:13.000248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:80560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.363 [2024-05-15 19:43:13.000253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:03.363 [2024-05-15 19:43:13.000264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:80568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.363 [2024-05-15 19:43:13.000269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:03.363 [2024-05-15 19:43:13.000283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.363 [2024-05-15 19:43:13.000289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:03.363 [2024-05-15 19:43:13.000531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:80584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.363 [2024-05-15 19:43:13.000539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:03.363 [2024-05-15 19:43:13.000550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.363 [2024-05-15 19:43:13.000556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:03.363 [2024-05-15 19:43:13.000567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:80600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:03.363 [2024-05-15 19:43:13.000573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:03.363 [2024-05-15 19:43:13.000583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.363 [2024-05-15 19:43:13.000589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:03.363 [2024-05-15 19:43:13.000599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:80616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.363 [2024-05-15 19:43:13.000604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:03.363 [2024-05-15 19:43:13.000615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.363 [2024-05-15 19:43:13.000620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:03.363 [2024-05-15 19:43:13.000631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:80632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.363 [2024-05-15 19:43:13.000636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:03.363 [2024-05-15 19:43:13.000647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.363 [2024-05-15 19:43:13.000653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:03.363 [2024-05-15 19:43:13.000967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:80648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.363 [2024-05-15 19:43:13.000974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:03.363 [2024-05-15 19:43:13.000985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.363 [2024-05-15 19:43:13.000990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:03.363 [2024-05-15 19:43:13.001001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:80664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.363 [2024-05-15 19:43:13.001006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:03.363 [2024-05-15 19:43:13.001017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:80672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.363 [2024-05-15 19:43:13.001023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:03.363 [2024-05-15 19:43:13.001034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 
lba:79720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.363 [2024-05-15 19:43:13.001040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:03.363 [2024-05-15 19:43:13.001050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:79728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.363 [2024-05-15 19:43:13.001056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:03.363 [2024-05-15 19:43:13.001066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.363 [2024-05-15 19:43:13.001072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:03.363 [2024-05-15 19:43:13.001082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:79744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.363 [2024-05-15 19:43:13.001088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:03.363 [2024-05-15 19:43:13.001098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:79752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.363 [2024-05-15 19:43:13.001104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:03.363 [2024-05-15 19:43:13.001114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:79760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.363 [2024-05-15 19:43:13.001120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:03.363 [2024-05-15 19:43:13.001130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:79768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.363 [2024-05-15 19:43:13.001136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:03.363 [2024-05-15 19:43:13.001146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:80680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.363 [2024-05-15 19:43:13.001152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:03.363 [2024-05-15 19:43:13.001162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.363 [2024-05-15 19:43:13.001167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:03.363 [2024-05-15 19:43:13.001177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:80696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.363 [2024-05-15 19:43:13.001183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.363 [2024-05-15 19:43:13.001193] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:80704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.363 [2024-05-15 19:43:13.001199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:03.363 [2024-05-15 19:43:13.001381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:80712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.363 [2024-05-15 19:43:13.001390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:03.363 [2024-05-15 19:43:13.001401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:80720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.363 [2024-05-15 19:43:13.001407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:03.363 [2024-05-15 19:43:13.001417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:80728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.363 [2024-05-15 19:43:13.001422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:03.363 [2024-05-15 19:43:13.001433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.363 [2024-05-15 19:43:13.001438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.363 [2024-05-15 19:43:13.001449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.363 [2024-05-15 19:43:13.001455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.363 [2024-05-15 19:43:13.001466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.363 [2024-05-15 19:43:13.001471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:03.363 [2024-05-15 19:43:13.001481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.363 [2024-05-15 19:43:13.001487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:03.364 [2024-05-15 19:43:13.001497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.364 [2024-05-15 19:43:13.001503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:03.364 [2024-05-15 19:43:13.001578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.364 [2024-05-15 19:43:13.001585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 
00:28:03.364 [2024-05-15 19:43:13.001598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.364 [2024-05-15 19:43:13.001603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:03.364 [2024-05-15 19:43:13.001613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.364 [2024-05-15 19:43:13.001619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:03.364 [2024-05-15 19:43:13.001630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.364 [2024-05-15 19:43:13.001636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:03.364 [2024-05-15 19:43:13.001647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.364 [2024-05-15 19:43:13.001653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:03.364 [2024-05-15 19:43:13.001666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.364 [2024-05-15 19:43:13.001672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:03.364 [2024-05-15 19:43:13.001682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.364 [2024-05-15 19:43:13.001688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:03.364 [2024-05-15 19:43:13.001698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.364 [2024-05-15 19:43:13.001704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:03.364 [2024-05-15 19:43:13.001902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.364 [2024-05-15 19:43:13.001909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:03.364 [2024-05-15 19:43:13.001921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.364 [2024-05-15 19:43:13.001926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:03.364 [2024-05-15 19:43:13.001937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.364 [2024-05-15 19:43:13.001943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:121 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:03.364 [2024-05-15 19:43:13.001953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.364 [2024-05-15 19:43:13.001959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:03.364 [2024-05-15 19:43:13.001969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.364 [2024-05-15 19:43:13.001974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:03.364 [2024-05-15 19:43:13.001985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.364 [2024-05-15 19:43:13.001991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:03.364 [2024-05-15 19:43:13.002001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.364 [2024-05-15 19:43:13.002006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:03.364 [2024-05-15 19:43:13.002017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:79928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.364 [2024-05-15 19:43:13.002023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:03.364 [2024-05-15 19:43:13.002289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.364 [2024-05-15 19:43:13.002296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:03.364 [2024-05-15 19:43:13.002309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:79944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.364 [2024-05-15 19:43:13.002318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:03.364 [2024-05-15 19:43:13.002328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.364 [2024-05-15 19:43:13.002334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:03.364 [2024-05-15 19:43:13.002344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.364 [2024-05-15 19:43:13.002350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:03.364 [2024-05-15 19:43:13.002360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.364 [2024-05-15 19:43:13.002366] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:03.364 [2024-05-15 19:43:13.002376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.364 [2024-05-15 19:43:13.002381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:03.364 [2024-05-15 19:43:13.002392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.364 [2024-05-15 19:43:13.002397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:03.364 [2024-05-15 19:43:13.002407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:79992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.364 [2024-05-15 19:43:13.002413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:03.364 [2024-05-15 19:43:13.002489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.364 [2024-05-15 19:43:13.002496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:03.364 [2024-05-15 19:43:13.002508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.364 [2024-05-15 19:43:13.002513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:03.364 [2024-05-15 19:43:13.002524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.364 [2024-05-15 19:43:13.002529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:03.364 [2024-05-15 19:43:13.002539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.364 [2024-05-15 19:43:13.002546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:03.364 [2024-05-15 19:43:13.002556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.364 [2024-05-15 19:43:13.002562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.364 [2024-05-15 19:43:13.002572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.364 [2024-05-15 19:43:13.002580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:03.364 [2024-05-15 19:43:13.002590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:03.364 [2024-05-15 19:43:13.002596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:03.364 [2024-05-15 19:43:13.002606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.364 [2024-05-15 19:43:13.002612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:03.364 [2024-05-15 19:43:13.002834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:80064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.364 [2024-05-15 19:43:13.002842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:03.364 [2024-05-15 19:43:13.002853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.364 [2024-05-15 19:43:13.002859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:03.364 [2024-05-15 19:43:13.002870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.364 [2024-05-15 19:43:13.002875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:03.364 [2024-05-15 19:43:13.002886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.364 [2024-05-15 19:43:13.002891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:03.364 [2024-05-15 19:43:13.002903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.364 [2024-05-15 19:43:13.002908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:03.364 [2024-05-15 19:43:13.002918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.364 [2024-05-15 19:43:13.002924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:03.365 [2024-05-15 19:43:13.002935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.365 [2024-05-15 19:43:13.002941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:03.365 [2024-05-15 19:43:13.002951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.365 [2024-05-15 19:43:13.002957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:03.365 [2024-05-15 19:43:13.003193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 
lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.365 [2024-05-15 19:43:13.003201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:03.365 [2024-05-15 19:43:13.003212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.365 [2024-05-15 19:43:13.003219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:03.365 [2024-05-15 19:43:13.003230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.365 [2024-05-15 19:43:13.003236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:03.365 [2024-05-15 19:43:13.003247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.365 [2024-05-15 19:43:13.003252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:03.365 [2024-05-15 19:43:13.003262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.365 [2024-05-15 19:43:13.003268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:03.365 [2024-05-15 19:43:13.003278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.365 [2024-05-15 19:43:13.003284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:03.365 [2024-05-15 19:43:13.003294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:80176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.365 [2024-05-15 19:43:13.003299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:03.365 [2024-05-15 19:43:13.003310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.365 [2024-05-15 19:43:13.003320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:03.365 [2024-05-15 19:43:13.003396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.365 [2024-05-15 19:43:13.003403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:03.365 [2024-05-15 19:43:13.003414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.365 [2024-05-15 19:43:13.003420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:03.365 [2024-05-15 19:43:13.003431] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.365 [2024-05-15 19:43:13.003436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:03.365 [2024-05-15 19:43:13.003447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.365 [2024-05-15 19:43:13.003452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:03.365 [2024-05-15 19:43:13.003463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.365 [2024-05-15 19:43:13.003469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:03.365 [2024-05-15 19:43:13.003479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.365 [2024-05-15 19:43:13.003485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:03.365 [2024-05-15 19:43:13.003496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:80240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.365 [2024-05-15 19:43:13.003502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:03.365 [2024-05-15 19:43:13.003513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.365 [2024-05-15 19:43:13.003519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:03.365 [2024-05-15 19:43:13.003751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.365 [2024-05-15 19:43:13.003759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:03.365 [2024-05-15 19:43:13.003770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.365 [2024-05-15 19:43:13.003775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:03.365 [2024-05-15 19:43:13.003786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.365 [2024-05-15 19:43:13.003791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:03.365 [2024-05-15 19:43:13.003801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:80280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.365 [2024-05-15 19:43:13.003807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 
00:28:03.365 [2024-05-15 19:43:13.003818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:80288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.365 [2024-05-15 19:43:13.003823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.365 [2024-05-15 19:43:13.003833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.365 [2024-05-15 19:43:13.003839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:03.365 [2024-05-15 19:43:13.003849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:80304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.365 [2024-05-15 19:43:13.003855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:03.365 [2024-05-15 19:43:13.003865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.365 [2024-05-15 19:43:13.003871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:03.365 [2024-05-15 19:43:13.004130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.365 [2024-05-15 19:43:13.004138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:03.365 [2024-05-15 19:43:13.004150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.365 [2024-05-15 19:43:13.004155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:03.365 [2024-05-15 19:43:13.004168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:80336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.365 [2024-05-15 19:43:13.004173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:03.365 [2024-05-15 19:43:13.004184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:80344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.365 [2024-05-15 19:43:13.004190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:03.365 [2024-05-15 19:43:13.004200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.365 [2024-05-15 19:43:13.004206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:03.365 [2024-05-15 19:43:13.004216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:80360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.365 [2024-05-15 19:43:13.004222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:67 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:03.365 [2024-05-15 19:43:13.004232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:80368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.365 [2024-05-15 19:43:13.004238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:03.365 [2024-05-15 19:43:13.004249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:80376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.365 [2024-05-15 19:43:13.004254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:03.365 [2024-05-15 19:43:13.004333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:80384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.365 [2024-05-15 19:43:13.004341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:03.365 [2024-05-15 19:43:13.004352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.365 [2024-05-15 19:43:13.004358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:03.365 [2024-05-15 19:43:13.004368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:80400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.365 [2024-05-15 19:43:13.004374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:03.365 [2024-05-15 19:43:13.004384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:80408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.365 [2024-05-15 19:43:13.004390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:03.366 [2024-05-15 19:43:13.004400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:80416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.366 [2024-05-15 19:43:13.004406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:03.366 [2024-05-15 19:43:13.004417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:80424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.366 [2024-05-15 19:43:13.004422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:03.366 [2024-05-15 19:43:13.004433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:80432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.366 [2024-05-15 19:43:13.004439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:03.366 [2024-05-15 19:43:13.004451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.366 [2024-05-15 19:43:13.004456] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:03.366 [2024-05-15 19:43:13.004656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:80448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.366 [2024-05-15 19:43:13.004664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:03.366 [2024-05-15 19:43:13.004674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.366 [2024-05-15 19:43:13.004680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:03.366 [2024-05-15 19:43:13.004691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:80464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.366 [2024-05-15 19:43:13.004696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:03.366 [2024-05-15 19:43:13.004707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:80472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.366 [2024-05-15 19:43:13.004712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:03.366 [2024-05-15 19:43:13.004723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:80480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.366 [2024-05-15 19:43:13.004729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:03.366 [2024-05-15 19:43:13.004739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.366 [2024-05-15 19:43:13.004744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:03.366 [2024-05-15 19:43:13.004755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.366 [2024-05-15 19:43:13.004761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:03.366 [2024-05-15 19:43:13.004772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.366 [2024-05-15 19:43:13.004777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:03.366 [2024-05-15 19:43:13.004883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.366 [2024-05-15 19:43:13.004891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:03.366 [2024-05-15 19:43:13.004901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:03.366 [2024-05-15 19:43:13.004907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:03.366 [2024-05-15 19:43:13.004917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.366 [2024-05-15 19:43:13.004924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:03.366 [2024-05-15 19:43:13.004935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:80536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.366 [2024-05-15 19:43:13.004941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:03.366 [2024-05-15 19:43:13.004951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.366 [2024-05-15 19:43:13.004957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.366 [2024-05-15 19:43:13.004968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.366 [2024-05-15 19:43:13.004973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:03.366 [2024-05-15 19:43:13.004983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.366 [2024-05-15 19:43:13.004989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:03.366 [2024-05-15 19:43:13.005000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.366 [2024-05-15 19:43:13.005006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:03.366 [2024-05-15 19:43:13.005319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.366 [2024-05-15 19:43:13.005328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:03.366 [2024-05-15 19:43:13.005339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:80584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.366 [2024-05-15 19:43:13.005345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:03.366 [2024-05-15 19:43:13.005355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.366 [2024-05-15 19:43:13.005361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:03.366 [2024-05-15 19:43:13.005372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 
lba:80600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.366 [2024-05-15 19:43:13.005377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:03.366 [2024-05-15 19:43:13.005388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.366 [2024-05-15 19:43:13.005393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:03.366 [2024-05-15 19:43:13.005404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:80616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.366 [2024-05-15 19:43:13.005409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:03.366 [2024-05-15 19:43:13.005420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.366 [2024-05-15 19:43:13.005425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:03.366 [2024-05-15 19:43:13.005438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:80632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.366 [2024-05-15 19:43:13.005443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:03.366 [2024-05-15 19:43:13.005816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:80640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.366 [2024-05-15 19:43:13.005823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:03.366 [2024-05-15 19:43:13.005835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:80648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.366 [2024-05-15 19:43:13.005840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:03.366 [2024-05-15 19:43:13.005851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.366 [2024-05-15 19:43:13.005856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:03.366 [2024-05-15 19:43:13.005868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:80664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.366 [2024-05-15 19:43:13.005873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:03.366 [2024-05-15 19:43:13.005884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:80672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.366 [2024-05-15 19:43:13.005889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:03.366 [2024-05-15 19:43:13.005900] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:79720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.367 [2024-05-15 19:43:13.005905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:03.367 [2024-05-15 19:43:13.005916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:79728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.367 [2024-05-15 19:43:13.005921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:03.367 [2024-05-15 19:43:13.005932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:79736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.367 [2024-05-15 19:43:13.005937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:03.367 [2024-05-15 19:43:13.005948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.367 [2024-05-15 19:43:13.005954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:03.367 [2024-05-15 19:43:13.005964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:79752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.367 [2024-05-15 19:43:13.005970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:03.367 [2024-05-15 19:43:13.005980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:79760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.367 [2024-05-15 19:43:13.005986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:03.367 [2024-05-15 19:43:13.005999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:79768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.367 [2024-05-15 19:43:13.006005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:03.367 [2024-05-15 19:43:13.006016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:80680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.367 [2024-05-15 19:43:13.006022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:03.367 [2024-05-15 19:43:13.006032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.367 [2024-05-15 19:43:13.006038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:03.367 [2024-05-15 19:43:13.006049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:80696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.367 [2024-05-15 19:43:13.006054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 
00:28:03.367 [2024-05-15 19:43:13.007787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.367 [2024-05-15 19:43:13.007795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:03.367 [2024-05-15 19:43:13.007806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:80712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.367 [2024-05-15 19:43:13.007812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:03.367 [2024-05-15 19:43:13.007822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:80720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.367 [2024-05-15 19:43:13.007828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:03.367 [2024-05-15 19:43:13.007838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:80728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.367 [2024-05-15 19:43:13.007843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:03.367 [2024-05-15 19:43:13.007854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.367 [2024-05-15 19:43:13.007859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.367 [2024-05-15 19:43:13.007870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.367 [2024-05-15 19:43:13.007875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.367 [2024-05-15 19:43:13.007886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.367 [2024-05-15 19:43:13.007891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:03.367 [2024-05-15 19:43:13.007901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.367 [2024-05-15 19:43:13.007907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:03.367 [2024-05-15 19:43:13.007917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.367 [2024-05-15 19:43:13.007925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:03.367 [2024-05-15 19:43:13.007935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.367 [2024-05-15 19:43:13.007941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:19 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:03.367 [2024-05-15 19:43:13.007952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.367 [2024-05-15 19:43:13.007957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:03.367 [2024-05-15 19:43:13.007968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.367 [2024-05-15 19:43:13.007974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:03.367 [2024-05-15 19:43:13.007984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.367 [2024-05-15 19:43:13.007990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:03.367 [2024-05-15 19:43:13.008000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.367 [2024-05-15 19:43:13.008006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:03.367 [2024-05-15 19:43:13.008017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.367 [2024-05-15 19:43:13.008022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:03.367 [2024-05-15 19:43:13.008033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.367 [2024-05-15 19:43:13.008038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:03.367 [2024-05-15 19:43:13.008049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:79864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.367 [2024-05-15 19:43:13.008054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:03.367 [2024-05-15 19:43:13.008197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.367 [2024-05-15 19:43:13.008205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:03.367 [2024-05-15 19:43:13.008216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.367 [2024-05-15 19:43:13.008221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:03.367 [2024-05-15 19:43:13.008232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.367 [2024-05-15 19:43:13.008237] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:03.367 [2024-05-15 19:43:13.008249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.367 [2024-05-15 19:43:13.008256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:03.367 [2024-05-15 19:43:13.008267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.367 [2024-05-15 19:43:13.008272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:03.367 [2024-05-15 19:43:13.008283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.367 [2024-05-15 19:43:13.008289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:03.367 [2024-05-15 19:43:13.008299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.367 [2024-05-15 19:43:13.008304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:03.367 [2024-05-15 19:43:13.008317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:79928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.367 [2024-05-15 19:43:13.008323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:03.367 [2024-05-15 19:43:13.008334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.367 [2024-05-15 19:43:13.008339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:03.367 [2024-05-15 19:43:13.008349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.367 [2024-05-15 19:43:13.008354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:03.367 [2024-05-15 19:43:13.008376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.367 [2024-05-15 19:43:13.008382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:03.367 [2024-05-15 19:43:13.008392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.367 [2024-05-15 19:43:13.008398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:03.367 [2024-05-15 19:43:13.008408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:03.367 [2024-05-15 19:43:13.008413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:03.367 [2024-05-15 19:43:13.008424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:79976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.367 [2024-05-15 19:43:13.008429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:03.367 [2024-05-15 19:43:13.008439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.367 [2024-05-15 19:43:13.008445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:03.367 [2024-05-15 19:43:13.008455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.367 [2024-05-15 19:43:13.008461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:03.367 [2024-05-15 19:43:13.008474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.367 [2024-05-15 19:43:13.008480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:03.368 [2024-05-15 19:43:13.008490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.368 [2024-05-15 19:43:13.008496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:03.368 [2024-05-15 19:43:13.008506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.368 [2024-05-15 19:43:13.008511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:03.368 [2024-05-15 19:43:13.008522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.368 [2024-05-15 19:43:13.008527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:03.368 [2024-05-15 19:43:13.008538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.368 [2024-05-15 19:43:13.008543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.368 [2024-05-15 19:43:13.008554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.368 [2024-05-15 19:43:13.008559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:03.368 [2024-05-15 19:43:13.008569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 
lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.368 [2024-05-15 19:43:13.008575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:03.368 [2024-05-15 19:43:13.008585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.368 [2024-05-15 19:43:13.008591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:03.368 [2024-05-15 19:43:13.008601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:80064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.368 [2024-05-15 19:43:13.008607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:03.368 [2024-05-15 19:43:13.008617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.368 [2024-05-15 19:43:13.008623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:03.368 [2024-05-15 19:43:13.008633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.368 [2024-05-15 19:43:13.008639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:03.368 [2024-05-15 19:43:13.008649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.368 [2024-05-15 19:43:13.008655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:03.368 [2024-05-15 19:43:13.008667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.368 [2024-05-15 19:43:13.008673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:03.368 [2024-05-15 19:43:13.008683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.368 [2024-05-15 19:43:13.008688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:03.368 [2024-05-15 19:43:13.008699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.368 [2024-05-15 19:43:13.008704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:03.368 [2024-05-15 19:43:13.008715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.368 [2024-05-15 19:43:13.008720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:03.368 [2024-05-15 19:43:13.008730] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.368 [2024-05-15 19:43:13.008736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:03.368 [2024-05-15 19:43:13.008746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.368 [2024-05-15 19:43:13.008752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:03.368 [2024-05-15 19:43:13.008762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.368 [2024-05-15 19:43:13.008768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:03.368 [2024-05-15 19:43:13.008778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.368 [2024-05-15 19:43:13.008783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:03.368 [2024-05-15 19:43:13.008794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.368 [2024-05-15 19:43:13.008799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:03.368 [2024-05-15 19:43:13.008810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.368 [2024-05-15 19:43:13.008815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:03.368 [2024-05-15 19:43:13.008825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:80176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.368 [2024-05-15 19:43:13.008831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:03.368 [2024-05-15 19:43:13.008841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.368 [2024-05-15 19:43:13.008847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:03.368 [2024-05-15 19:43:13.008857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.368 [2024-05-15 19:43:13.008864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:03.368 [2024-05-15 19:43:13.008874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.368 [2024-05-15 19:43:13.008879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 
00:28:03.368 [2024-05-15 19:43:13.008889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.368 [2024-05-15 19:43:13.008895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:03.368 [2024-05-15 19:43:13.008905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.368 [2024-05-15 19:43:13.008911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:03.368 [2024-05-15 19:43:13.008922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.368 [2024-05-15 19:43:13.008927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:03.368 [2024-05-15 19:43:13.008938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.368 [2024-05-15 19:43:13.008943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:03.368 [2024-05-15 19:43:13.008954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:80240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.368 [2024-05-15 19:43:13.008959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:03.368 [2024-05-15 19:43:13.009327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.368 [2024-05-15 19:43:13.009336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:03.368 [2024-05-15 19:43:13.009347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:80256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.368 [2024-05-15 19:43:13.009353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:03.368 [2024-05-15 19:43:13.009363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.368 [2024-05-15 19:43:13.009369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:03.368 [2024-05-15 19:43:13.009379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.368 [2024-05-15 19:43:13.009384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:03.368 [2024-05-15 19:43:13.009395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:80280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.368 [2024-05-15 19:43:13.009401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:81 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:03.368 [2024-05-15 19:43:13.009411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.368 [2024-05-15 19:43:13.009417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.368 [2024-05-15 19:43:13.009429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.368 [2024-05-15 19:43:13.009435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:03.368 [2024-05-15 19:43:13.009445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.368 [2024-05-15 19:43:13.009450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:03.368 [2024-05-15 19:43:13.009461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.368 [2024-05-15 19:43:13.009467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:03.368 [2024-05-15 19:43:13.009477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:80320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.368 [2024-05-15 19:43:13.009484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:03.368 [2024-05-15 19:43:13.009494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:80328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.368 [2024-05-15 19:43:13.009500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:03.368 [2024-05-15 19:43:13.009510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.368 [2024-05-15 19:43:13.009516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:03.368 [2024-05-15 19:43:13.009526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:80344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.368 [2024-05-15 19:43:13.009531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:03.368 [2024-05-15 19:43:13.009542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:80352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.368 [2024-05-15 19:43:13.009547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:03.368 [2024-05-15 19:43:13.009558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.368 [2024-05-15 19:43:13.009563] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:03.369 [2024-05-15 19:43:13.009573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.369 [2024-05-15 19:43:13.009579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:03.369 [2024-05-15 19:43:13.009589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.369 [2024-05-15 19:43:13.009594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:03.369 [2024-05-15 19:43:13.009737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:80384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.369 [2024-05-15 19:43:13.009745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:03.369 [2024-05-15 19:43:13.009758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.369 [2024-05-15 19:43:13.009763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:03.369 [2024-05-15 19:43:13.009774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:80400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.369 [2024-05-15 19:43:13.009780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:03.369 [2024-05-15 19:43:13.009790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:80408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.369 [2024-05-15 19:43:13.009795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:03.369 [2024-05-15 19:43:13.009806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:80416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.369 [2024-05-15 19:43:13.009812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:03.369 [2024-05-15 19:43:13.009822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.369 [2024-05-15 19:43:13.009828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:03.369 [2024-05-15 19:43:13.009837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:80432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.369 [2024-05-15 19:43:13.009843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:03.369 [2024-05-15 19:43:13.009853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:03.369 [2024-05-15 19:43:13.009859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:03.369 [2024-05-15 19:43:13.009869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:80448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.369 [2024-05-15 19:43:13.009874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:03.369 [2024-05-15 19:43:13.009884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.369 [2024-05-15 19:43:13.009890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:03.369 [2024-05-15 19:43:13.009900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:80464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.369 [2024-05-15 19:43:13.009906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:03.369 [2024-05-15 19:43:13.009916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:80472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.369 [2024-05-15 19:43:13.009922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:03.369 [2024-05-15 19:43:13.009932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:80480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.369 [2024-05-15 19:43:13.009938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:03.369 [2024-05-15 19:43:13.009948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.369 [2024-05-15 19:43:13.009955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:03.369 [2024-05-15 19:43:13.009965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.369 [2024-05-15 19:43:13.009971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:03.369 [2024-05-15 19:43:13.009981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.369 [2024-05-15 19:43:13.009987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:03.369 [2024-05-15 19:43:13.009997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.369 [2024-05-15 19:43:13.010002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:03.369 [2024-05-15 19:43:13.010013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 
lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.369 [2024-05-15 19:43:13.010018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:03.369 [2024-05-15 19:43:13.010029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.369 [2024-05-15 19:43:13.010034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:03.369 [2024-05-15 19:43:13.010044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:80536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.369 [2024-05-15 19:43:13.010049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:03.369 [2024-05-15 19:43:13.010060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.369 [2024-05-15 19:43:13.010066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.369 [2024-05-15 19:43:13.010076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:80552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.369 [2024-05-15 19:43:13.010082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:03.369 [2024-05-15 19:43:13.010092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.369 [2024-05-15 19:43:13.010098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:03.369 [2024-05-15 19:43:13.010109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:80568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.369 [2024-05-15 19:43:13.010115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:03.369 [2024-05-15 19:43:13.010328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.369 [2024-05-15 19:43:13.010336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:03.369 [2024-05-15 19:43:13.010348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:80584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.369 [2024-05-15 19:43:13.010354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:03.369 [2024-05-15 19:43:13.010365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:80592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.369 [2024-05-15 19:43:13.010370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:03.369 [2024-05-15 19:43:13.010381] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:80600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.369 [2024-05-15 19:43:13.010386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:03.369 [2024-05-15 19:43:13.010396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.369 [2024-05-15 19:43:13.010402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:03.369 [2024-05-15 19:43:13.010412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:80616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.369 [2024-05-15 19:43:13.010418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:03.369 [2024-05-15 19:43:13.010428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.369 [2024-05-15 19:43:13.010434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:03.369 [2024-05-15 19:43:13.010444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:80632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.369 [2024-05-15 19:43:13.010450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:03.369 [2024-05-15 19:43:13.010461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.369 [2024-05-15 19:43:13.010466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:03.369 [2024-05-15 19:43:13.010477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:80648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.369 [2024-05-15 19:43:13.010482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:03.369 [2024-05-15 19:43:13.010493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:80656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.369 [2024-05-15 19:43:13.010498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:03.369 [2024-05-15 19:43:13.010508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:80664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.369 [2024-05-15 19:43:13.010513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:03.369 [2024-05-15 19:43:13.010524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:80672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.369 [2024-05-15 19:43:13.010529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 
00:28:03.369 [2024-05-15 19:43:13.010539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.369 [2024-05-15 19:43:13.010545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:03.369 [2024-05-15 19:43:13.010557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:79728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.369 [2024-05-15 19:43:13.010563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:03.369 [2024-05-15 19:43:13.010573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:79736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.369 [2024-05-15 19:43:13.010578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:03.369 [2024-05-15 19:43:13.010589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:79744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.370 [2024-05-15 19:43:13.010595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:03.370 [2024-05-15 19:43:13.010730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:79752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.370 [2024-05-15 19:43:13.010737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:03.370 [2024-05-15 19:43:13.010748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:79760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.370 [2024-05-15 19:43:13.010753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:03.370 [2024-05-15 19:43:13.010764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:79768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.370 [2024-05-15 19:43:13.010769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:03.370 [2024-05-15 19:43:13.010779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:80680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.370 [2024-05-15 19:43:13.010785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:03.370 [2024-05-15 19:43:13.010795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:80688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.370 [2024-05-15 19:43:13.010800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:03.370 [2024-05-15 19:43:13.010863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:80696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.370 [2024-05-15 19:43:13.010870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.370 [2024-05-15 19:43:13.010882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:80704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.370 [2024-05-15 19:43:13.010887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:03.370 [2024-05-15 19:43:13.010898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:80712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.370 [2024-05-15 19:43:13.010903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:03.370 [2024-05-15 19:43:13.010913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:80720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.370 [2024-05-15 19:43:13.010920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:03.370 [2024-05-15 19:43:13.010932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.370 [2024-05-15 19:43:13.010938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:03.370 [2024-05-15 19:43:13.010948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:80736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.370 [2024-05-15 19:43:13.010954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.370 [2024-05-15 19:43:13.010964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.370 [2024-05-15 19:43:13.010970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.370 [2024-05-15 19:43:13.010980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.370 [2024-05-15 19:43:13.010986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:03.370 [2024-05-15 19:43:13.011166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.370 [2024-05-15 19:43:13.011173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:03.370 [2024-05-15 19:43:13.011184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.370 [2024-05-15 19:43:13.011189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:03.370 [2024-05-15 19:43:13.011199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.370 [2024-05-15 19:43:13.011205] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:03.370 [2024-05-15 19:43:13.011216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.370 [2024-05-15 19:43:13.011221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:03.370 [2024-05-15 19:43:13.011232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.370 [2024-05-15 19:43:13.011237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:03.370 [2024-05-15 19:43:13.011248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.370 [2024-05-15 19:43:13.011253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:03.370 [2024-05-15 19:43:13.011264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.370 [2024-05-15 19:43:13.011269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:03.370 [2024-05-15 19:43:13.011280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.370 [2024-05-15 19:43:13.011285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:03.370 [2024-05-15 19:43:13.011366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.370 [2024-05-15 19:43:13.011374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:03.370 [2024-05-15 19:43:13.011385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:79864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.370 [2024-05-15 19:43:13.011391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:03.370 [2024-05-15 19:43:13.011401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.370 [2024-05-15 19:43:13.011406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:03.370 [2024-05-15 19:43:13.011418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.370 [2024-05-15 19:43:13.011423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:03.370 [2024-05-15 19:43:13.011434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:03.370 [2024-05-15 19:43:13.011439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:03.370 [2024-05-15 19:43:13.011450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.370 [2024-05-15 19:43:13.011456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:03.370 [2024-05-15 19:43:13.011467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.370 [2024-05-15 19:43:13.011472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:03.370 [2024-05-15 19:43:13.011482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.370 [2024-05-15 19:43:13.011488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:03.370 [2024-05-15 19:43:13.011705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.370 [2024-05-15 19:43:13.011713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:03.370 [2024-05-15 19:43:13.011724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:79928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.370 [2024-05-15 19:43:13.011730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:03.370 [2024-05-15 19:43:13.011740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.370 [2024-05-15 19:43:13.011746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:03.370 [2024-05-15 19:43:13.011757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.370 [2024-05-15 19:43:13.011762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:03.370 [2024-05-15 19:43:13.011773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.370 [2024-05-15 19:43:13.011780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:03.370 [2024-05-15 19:43:13.011790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:79960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.370 [2024-05-15 19:43:13.011796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:03.370 [2024-05-15 19:43:13.011806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 
lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.370 [2024-05-15 19:43:13.011812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:03.370 [2024-05-15 19:43:13.011822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:79976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.370 [2024-05-15 19:43:13.011828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:03.370 [2024-05-15 19:43:13.012014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.370 [2024-05-15 19:43:13.012021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:03.370 [2024-05-15 19:43:13.012032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.370 [2024-05-15 19:43:13.012038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.012048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.371 [2024-05-15 19:43:13.012054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.012064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.371 [2024-05-15 19:43:13.012070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.012080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.371 [2024-05-15 19:43:13.012086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.012096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.371 [2024-05-15 19:43:13.012102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.012112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.371 [2024-05-15 19:43:13.012117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.012128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.371 [2024-05-15 19:43:13.012133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.012217] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.371 [2024-05-15 19:43:13.012225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.012237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.371 [2024-05-15 19:43:13.012242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.012253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.371 [2024-05-15 19:43:13.012260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.012271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.371 [2024-05-15 19:43:13.012276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.012287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.371 [2024-05-15 19:43:13.012292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.012303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.371 [2024-05-15 19:43:13.012309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.012325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.371 [2024-05-15 19:43:13.012330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.012341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.371 [2024-05-15 19:43:13.012347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.012669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.371 [2024-05-15 19:43:13.012676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.012687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.371 [2024-05-15 19:43:13.012693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002c p:0 m:0 dnr:0 
00:28:03.371 [2024-05-15 19:43:13.012703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.371 [2024-05-15 19:43:13.012709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.012720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.371 [2024-05-15 19:43:13.012725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.012735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.371 [2024-05-15 19:43:13.012741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.012752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.371 [2024-05-15 19:43:13.012759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.012769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.371 [2024-05-15 19:43:13.012775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.012785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.371 [2024-05-15 19:43:13.012790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.012974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.371 [2024-05-15 19:43:13.012982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.012993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.371 [2024-05-15 19:43:13.012998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.013009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.371 [2024-05-15 19:43:13.013014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.013025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.371 [2024-05-15 19:43:13.013030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:121 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.013040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.371 [2024-05-15 19:43:13.013045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.013056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.371 [2024-05-15 19:43:13.013062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.013072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.371 [2024-05-15 19:43:13.013078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.013089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.371 [2024-05-15 19:43:13.013094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.013162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.371 [2024-05-15 19:43:13.013170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.013191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.371 [2024-05-15 19:43:13.013200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.013210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.371 [2024-05-15 19:43:13.013216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.013227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.371 [2024-05-15 19:43:13.013232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.013243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.371 [2024-05-15 19:43:13.013248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.013259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.371 [2024-05-15 19:43:13.013264] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.013275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:80288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.371 [2024-05-15 19:43:13.013281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.013292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.371 [2024-05-15 19:43:13.013297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.013578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:80304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.371 [2024-05-15 19:43:13.013586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.013597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.371 [2024-05-15 19:43:13.013603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.013614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:80320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.371 [2024-05-15 19:43:13.013620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.013632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.371 [2024-05-15 19:43:13.013638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.013649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:80336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.371 [2024-05-15 19:43:13.013657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.013668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:80344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.371 [2024-05-15 19:43:13.013675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.013686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.371 [2024-05-15 19:43:13.013692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.013703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:80360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:03.371 [2024-05-15 19:43:13.013709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.013756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.371 [2024-05-15 19:43:13.013762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:03.371 [2024-05-15 19:43:13.013775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:80376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.372 [2024-05-15 19:43:13.013780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:03.372 [2024-05-15 19:43:13.013791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.372 [2024-05-15 19:43:13.013797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:03.372 [2024-05-15 19:43:13.013808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.372 [2024-05-15 19:43:13.013814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:03.372 [2024-05-15 19:43:13.013826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:80400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.372 [2024-05-15 19:43:13.013832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:03.372 [2024-05-15 19:43:13.013843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:80408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.372 [2024-05-15 19:43:13.013848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:03.372 [2024-05-15 19:43:13.013860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.372 [2024-05-15 19:43:13.013866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:03.372 [2024-05-15 19:43:13.013878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.372 [2024-05-15 19:43:13.013883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:03.372 [2024-05-15 19:43:13.014184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.372 [2024-05-15 19:43:13.014191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:03.372 [2024-05-15 19:43:13.014203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 
lba:80440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.372 [2024-05-15 19:43:13.014209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:03.372 [2024-05-15 19:43:13.014225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:80448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.372 [2024-05-15 19:43:13.014231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:03.372 [2024-05-15 19:43:13.014243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.372 [2024-05-15 19:43:13.014249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:03.372 [2024-05-15 19:43:13.014260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:80464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.372 [2024-05-15 19:43:13.014266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:03.372 [2024-05-15 19:43:13.014277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:80472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.372 [2024-05-15 19:43:13.014284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:03.372 [2024-05-15 19:43:13.014295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:80480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.372 [2024-05-15 19:43:13.014301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:03.372 [2024-05-15 19:43:13.014317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.372 [2024-05-15 19:43:13.014323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:03.372 [2024-05-15 19:43:13.014389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.372 [2024-05-15 19:43:13.014396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:03.372 [2024-05-15 19:43:13.014409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.372 [2024-05-15 19:43:13.014415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:03.372 [2024-05-15 19:43:13.014427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.372 [2024-05-15 19:43:13.014432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:03.372 [2024-05-15 19:43:13.014444] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.372 [2024-05-15 19:43:13.014450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:03.372 [2024-05-15 19:43:13.014462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.372 [2024-05-15 19:43:13.014468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:03.372 [2024-05-15 19:43:13.014480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:80536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.372 [2024-05-15 19:43:13.014485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:03.372 [2024-05-15 19:43:13.014500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.372 [2024-05-15 19:43:13.014506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.372 [2024-05-15 19:43:13.014518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:80552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.372 [2024-05-15 19:43:13.014524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:03.372 [2024-05-15 19:43:13.014826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:80560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.372 [2024-05-15 19:43:13.014832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:03.372 [2024-05-15 19:43:13.014846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:80568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.372 [2024-05-15 19:43:13.014851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:03.372 [2024-05-15 19:43:13.014864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.372 [2024-05-15 19:43:13.014870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:03.372 [2024-05-15 19:43:13.014883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:80584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.372 [2024-05-15 19:43:13.014888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:03.372 [2024-05-15 19:43:13.014901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:80592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.372 [2024-05-15 19:43:13.014907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 
00:28:03.372 [2024-05-15 19:43:13.014920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:80600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.372 [2024-05-15 19:43:13.014926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:03.372 [2024-05-15 19:43:13.014939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.372 [2024-05-15 19:43:13.014945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:03.372 [2024-05-15 19:43:13.014957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:80616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.372 [2024-05-15 19:43:13.014962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:03.372 [2024-05-15 19:43:13.015296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:80624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.372 [2024-05-15 19:43:13.015302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:03.372 [2024-05-15 19:43:13.015319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.372 [2024-05-15 19:43:13.015325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:03.372 [2024-05-15 19:43:13.015338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:80640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.372 [2024-05-15 19:43:13.015346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:03.372 [2024-05-15 19:43:13.015358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.372 [2024-05-15 19:43:13.015364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:03.372 [2024-05-15 19:43:13.015377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:80656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.372 [2024-05-15 19:43:13.015383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:03.372 [2024-05-15 19:43:13.015396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.372 [2024-05-15 19:43:13.015402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:03.372 [2024-05-15 19:43:13.015415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:80672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.372 [2024-05-15 19:43:13.015420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:68 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:03.372 [2024-05-15 19:43:13.015434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:79720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.372 [2024-05-15 19:43:13.015439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:03.372 [2024-05-15 19:43:13.015453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:79728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.372 [2024-05-15 19:43:13.015459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:03.372 [2024-05-15 19:43:13.015472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:79736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.372 [2024-05-15 19:43:13.015477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:03.372 [2024-05-15 19:43:13.015491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:79744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.372 [2024-05-15 19:43:13.015497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:03.372 [2024-05-15 19:43:13.015511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:79752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.372 [2024-05-15 19:43:13.015516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:03.372 [2024-05-15 19:43:13.015529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:79760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.372 [2024-05-15 19:43:13.015536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:03.372 [2024-05-15 19:43:13.015550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:79768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.373 [2024-05-15 19:43:13.015556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.015569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:80680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.015576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.015646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:80688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.015653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.015668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:80696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.015673] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.015687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:80704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.015692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.015706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.015712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.015725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:80720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.015731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.015744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:80728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.015750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.015764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:80736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.015770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.015784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.015790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.015906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.015913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.015928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.015934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.015949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.015955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.015970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:03.373 [2024-05-15 19:43:13.015976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.015993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.015998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.016013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.016019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.016033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.016039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.016053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.016059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.016222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.016229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.016244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.016250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.016265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.016271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.016285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.016291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.016306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.016311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.016330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 
lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.016336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.016351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.016356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.016371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.016376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.016513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.016519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.016535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.016541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.016556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:79928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.016561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.016576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.016581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.016596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.016601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.016617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.016622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.016638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:79960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.016643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.016658] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.016664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.016821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.016827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.016843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.016849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.016864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.016870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.016885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.016891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.016906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.016914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.016929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.016935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.016950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.016956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.016972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.016977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.017115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.017122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:28:03.373 [2024-05-15 19:43:13.017137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.017143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.017159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.017165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.017181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.017187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.017203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.017209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.017224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.017230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:03.373 [2024-05-15 19:43:13.017246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.373 [2024-05-15 19:43:13.017252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:03.374 [2024-05-15 19:43:13.017268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.374 [2024-05-15 19:43:13.017273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:03.374 [2024-05-15 19:43:13.017416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.374 [2024-05-15 19:43:13.017425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:03.374 [2024-05-15 19:43:13.017441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.374 [2024-05-15 19:43:13.017447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:03.374 [2024-05-15 19:43:13.017463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.374 [2024-05-15 19:43:13.017469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:97 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:03.374 [2024-05-15 19:43:13.017485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.374 [2024-05-15 19:43:13.017490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:03.374 [2024-05-15 19:43:13.017507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.374 [2024-05-15 19:43:13.017512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:03.374 [2024-05-15 19:43:13.017528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.374 [2024-05-15 19:43:13.017534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:03.374 [2024-05-15 19:43:13.017550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.374 [2024-05-15 19:43:13.017556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:03.374 [2024-05-15 19:43:13.017572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.374 [2024-05-15 19:43:13.017578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:03.374 [2024-05-15 19:43:13.017611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.374 [2024-05-15 19:43:13.017618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:03.374 [2024-05-15 19:43:27.044487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.374 [2024-05-15 19:43:27.044523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:03.374 [2024-05-15 19:43:27.044553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:36920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.374 [2024-05-15 19:43:27.044560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:03.374 [2024-05-15 19:43:27.044571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:36936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.374 [2024-05-15 19:43:27.044576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:03.374 [2024-05-15 19:43:27.044586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:36952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.374 [2024-05-15 19:43:27.044591] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:03.374 [2024-05-15 19:43:27.044605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:36968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.374 [2024-05-15 19:43:27.044610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:03.374 [2024-05-15 19:43:27.044620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:36984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.374 [2024-05-15 19:43:27.044625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:03.374 [2024-05-15 19:43:27.044635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:37000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.374 [2024-05-15 19:43:27.044640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:03.374 [2024-05-15 19:43:27.044651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:37016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.374 [2024-05-15 19:43:27.044656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:03.374 [2024-05-15 19:43:27.044666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.374 [2024-05-15 19:43:27.044671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:03.374 [2024-05-15 19:43:27.044681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:37048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.374 [2024-05-15 19:43:27.044687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.374 [2024-05-15 19:43:27.044697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:37064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.374 [2024-05-15 19:43:27.044702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:03.374 [2024-05-15 19:43:27.044712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:37080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.374 [2024-05-15 19:43:27.044717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:03.374 [2024-05-15 19:43:27.044727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:37096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.374 [2024-05-15 19:43:27.044732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:03.374 [2024-05-15 19:43:27.044744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:37112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:03.374 [2024-05-15 19:43:27.044749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:03.374 [2024-05-15 19:43:27.044759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:37128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.374 [2024-05-15 19:43:27.044764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:03.374 [2024-05-15 19:43:27.044774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:37144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.374 [2024-05-15 19:43:27.044779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:03.374 [2024-05-15 19:43:27.044792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:37160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.374 [2024-05-15 19:43:27.044799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:03.374 [2024-05-15 19:43:27.044809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:37176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.374 [2024-05-15 19:43:27.044814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:03.374 [2024-05-15 19:43:27.044825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:37192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.374 [2024-05-15 19:43:27.044830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:03.374 [2024-05-15 19:43:27.044840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.374 [2024-05-15 19:43:27.044845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:03.374 [2024-05-15 19:43:27.044855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:37224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.374 [2024-05-15 19:43:27.044861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:03.374 [2024-05-15 19:43:27.044873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:37240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.374 [2024-05-15 19:43:27.044879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:03.374 [2024-05-15 19:43:27.044890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:37256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.374 [2024-05-15 19:43:27.044896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:03.374 [2024-05-15 19:43:27.044906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 
lba:37272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.374 [2024-05-15 19:43:27.044912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:03.374 [2024-05-15 19:43:27.044923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:37288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.374 [2024-05-15 19:43:27.044930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:03.374 [2024-05-15 19:43:27.044941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:37304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.374 [2024-05-15 19:43:27.044948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:03.374 [2024-05-15 19:43:27.044958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.374 [2024-05-15 19:43:27.044964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:03.374 [2024-05-15 19:43:27.044975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:37336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.374 [2024-05-15 19:43:27.044980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:03.374 [2024-05-15 19:43:27.044990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:37352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.374 [2024-05-15 19:43:27.044997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:03.374 [2024-05-15 19:43:27.045007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:37368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.374 [2024-05-15 19:43:27.045012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:03.374 [2024-05-15 19:43:27.045022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:37384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.375 [2024-05-15 19:43:27.045027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:03.375 [2024-05-15 19:43:27.045037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:37400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.375 [2024-05-15 19:43:27.045042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:03.375 [2024-05-15 19:43:27.045052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:37416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.375 [2024-05-15 19:43:27.045057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:03.375 [2024-05-15 19:43:27.045068] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:37432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.375 [2024-05-15 19:43:27.045073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:03.375 [2024-05-15 19:43:27.045083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:37448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.375 [2024-05-15 19:43:27.045088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:03.375 [2024-05-15 19:43:27.045098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:36840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.375 [2024-05-15 19:43:27.045103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:03.375 [2024-05-15 19:43:27.045114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:36864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.375 [2024-05-15 19:43:27.045119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:03.375 [2024-05-15 19:43:27.045950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:37464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.375 [2024-05-15 19:43:27.045962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:03.375 [2024-05-15 19:43:27.045975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.375 [2024-05-15 19:43:27.045980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:03.375 [2024-05-15 19:43:27.045991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.375 [2024-05-15 19:43:27.045996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:03.375 [2024-05-15 19:43:27.046006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:37512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.375 [2024-05-15 19:43:27.046014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:03.375 [2024-05-15 19:43:27.046024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:37528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.375 [2024-05-15 19:43:27.046030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.375 [2024-05-15 19:43:27.046040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:37544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.375 [2024-05-15 19:43:27.046046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
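The run of notices above and below is SPDK's per-command error dump: every WRITE/READ still queued on I/O qpair 1 completes with ANA status ASYMMETRIC ACCESS INACCESSIBLE (03/02, i.e. status code type 0x3 Path Related, status code 0x02) while the multipath test holds the active path inaccessible. For offline triage of a capture like this, a per-status tally is usually enough to confirm that the failures are all this expected ANA state; the one-liner below is an illustrative post-processing sketch only, not part of the test, and the saved log file name autorun.log is an assumption.

# Tally spdk_nvme_print_completion notices by status string in a saved console log (file name assumed).
grep -o 'spdk_nvme_print_completion: \*NOTICE\*: [A-Z ]*' autorun.log | sort | uniq -c | sort -rn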
00:28:03.375 [2024-05-15 19:43:27.046056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:37560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.375 [2024-05-15 19:43:27.046061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:03.375 [2024-05-15 19:43:27.046071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:37576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.375 [2024-05-15 19:43:27.046076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:03.375 [2024-05-15 19:43:27.046087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:37592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.375 [2024-05-15 19:43:27.046092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:03.375 [2024-05-15 19:43:27.046103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:37608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.375 [2024-05-15 19:43:27.046107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:03.375 [2024-05-15 19:43:27.046118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.375 [2024-05-15 19:43:27.046122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:03.375 [2024-05-15 19:43:27.046132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:37640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.375 [2024-05-15 19:43:27.046138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:03.375 [2024-05-15 19:43:27.046148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:37656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.375 [2024-05-15 19:43:27.046153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:03.375 [2024-05-15 19:43:27.046163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:37672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.375 [2024-05-15 19:43:27.046168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:03.375 [2024-05-15 19:43:27.046178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.375 [2024-05-15 19:43:27.046183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:03.375 [2024-05-15 19:43:27.046194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:37704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.375 [2024-05-15 19:43:27.046199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:15 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:03.375 [2024-05-15 19:43:27.046211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:37720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.375 [2024-05-15 19:43:27.046216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:03.375 [2024-05-15 19:43:27.046226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:37736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.375 [2024-05-15 19:43:27.046231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:03.375 [2024-05-15 19:43:27.046241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.375 [2024-05-15 19:43:27.046246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:03.375 [2024-05-15 19:43:27.046256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:36824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.375 [2024-05-15 19:43:27.046262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:03.375 [2024-05-15 19:43:27.046272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:36848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.375 [2024-05-15 19:43:27.046277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:03.375 [2024-05-15 19:43:27.046286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:36888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.375 [2024-05-15 19:43:27.046291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:03.375 [2024-05-15 19:43:27.046301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:37776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.375 [2024-05-15 19:43:27.046307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:03.375 [2024-05-15 19:43:27.046321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:37792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.375 [2024-05-15 19:43:27.046327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:03.375 [2024-05-15 19:43:27.046337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:37808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.375 [2024-05-15 19:43:27.046342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:03.375 [2024-05-15 19:43:27.046352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:37824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.375 [2024-05-15 19:43:27.046357] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:03.375 [2024-05-15 19:43:27.046565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:37840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.375 [2024-05-15 19:43:27.046573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:03.375 Received shutdown signal, test time was about 29.301471 seconds 00:28:03.375 00:28:03.375 Latency(us) 00:28:03.375 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:03.375 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:03.375 Verification LBA range: start 0x0 length 0x4000 00:28:03.375 Nvme0n1 : 29.30 9305.19 36.35 0.00 0.00 13734.04 190.29 3075822.93 00:28:03.375 =================================================================================================================== 00:28:03.375 Total : 9305.19 36.35 0.00 0.00 13734.04 190.29 3075822.93 00:28:03.375 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:03.636 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:28:03.636 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:03.636 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:28:03.636 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:03.636 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:28:03.636 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:03.636 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:28:03.636 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:03.636 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:03.636 rmmod nvme_tcp 00:28:03.636 rmmod nvme_fabrics 00:28:03.636 rmmod nvme_keyring 00:28:03.636 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:03.636 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:28:03.636 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:28:03.636 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 3739348 ']' 00:28:03.636 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 3739348 00:28:03.636 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 3739348 ']' 00:28:03.636 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 3739348 00:28:03.636 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:28:03.636 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:03.636 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3739348 00:28:03.636 19:43:29 nvmf_tcp.nvmf_host_multipath_status 
-- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:03.636 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:03.636 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3739348' 00:28:03.636 killing process with pid 3739348 00:28:03.636 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 3739348 00:28:03.636 [2024-05-15 19:43:29.783544] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:03.636 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 3739348 00:28:03.897 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:03.897 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:03.897 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:03.897 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:03.897 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:03.897 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.897 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:03.897 19:43:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:06.444 19:43:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:06.444 00:28:06.444 real 0m44.270s 00:28:06.444 user 1m55.245s 00:28:06.444 sys 0m12.511s 00:28:06.444 19:43:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:06.444 19:43:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:06.444 ************************************ 00:28:06.444 END TEST nvmf_host_multipath_status 00:28:06.444 ************************************ 00:28:06.444 19:43:32 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:06.444 19:43:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:06.444 19:43:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:06.444 19:43:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:06.444 ************************************ 00:28:06.444 START TEST nvmf_discovery_remove_ifc 00:28:06.444 ************************************ 00:28:06.444 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:06.444 * Looking for test storage... 
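The nvmftestfini/killprocess sequence traced above is the per-test cleanup: unload the kernel NVMe/TCP initiator modules, kill the nvmf_tgt reactor process and wait for it to exit, remove the target-side network namespace, and flush the initiator address. A minimal standalone sketch of the same steps follows, run as root, assuming the namespace and interface names of this run (cvl_0_0_ns_spdk, cvl_0_0, cvl_0_1) and the target PID in $nvmfpid; the namespace deletion itself is inferred from the _remove_spdk_ns call, whose output the trace redirects away.

# Cleanup sketch mirroring the teardown above (names taken from this run; run as root).
modprobe -v -r nvme-tcp nvme-fabrics        # the trace above shows nvme_fabrics and nvme_keyring going with nvme_tcp
kill "$nvmfpid" && tail --pid="$nvmfpid" -f /dev/null   # block until the target process has exited
ip netns del cvl_0_0_ns_spdk                # physical port cvl_0_0 falls back to the default namespace
ip -4 addr flush cvl_0_1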
00:28:06.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:06.444 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:06.444 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:28:06.444 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:06.444 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:06.444 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:06.444 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:06.444 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:06.444 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:06.444 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:06.444 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:06.444 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:06.444 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:06.444 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:06.444 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:06.444 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:06.444 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:06.444 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:06.444 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:06.444 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:06.444 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:06.444 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:06.444 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:06.444 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.444 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.444 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.444 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:28:06.444 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.444 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:28:06.444 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:06.444 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:06.444 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:06.444 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:06.444 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:06.444 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:06.444 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:06.444 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:06.445 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:28:06.445 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:28:06.445 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:28:06.445 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:28:06.445 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:28:06.445 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:28:06.445 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:28:06.445 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:06.445 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:06.445 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:06.445 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:06.445 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:06.445 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:06.445 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:06.445 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:06.445 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:06.445 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:06.445 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:28:06.445 19:43:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:14.588 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:14.588 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:14.588 19:43:40 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:14.588 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:14.589 Found net devices under 0000:31:00.0: cvl_0_0 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:14.589 Found net devices under 0000:31:00.1: cvl_0_1 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:14.589 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:14.589 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.707 ms 00:28:14.589 00:28:14.589 --- 10.0.0.2 ping statistics --- 00:28:14.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.589 rtt min/avg/max/mdev = 0.707/0.707/0.707/0.000 ms 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:14.589 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:14.589 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.369 ms 00:28:14.589 00:28:14.589 --- 10.0.0.1 ping statistics --- 00:28:14.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.589 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:14.589 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:14.849 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=3750628 00:28:14.849 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 3750628 00:28:14.849 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 3750628 ']' 00:28:14.849 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:14.849 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:14.849 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:14.849 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:14.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:14.849 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:14.849 19:43:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:14.849 [2024-05-15 19:43:40.836689] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
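In the trace that follows, the just-started target is configured through rpc_cmd, and only the resulting notices are echoed (TCP transport init, a discovery listener on 10.0.0.2:8009, a null bdev named null0, and an NVM subsystem listener on 10.0.0.2:4420), not the RPC arguments themselves. A representative rpc.py sequence that produces the same state is sketched below against the default /var/tmp/spdk.sock; the transport options, bdev size, serial number, and the use of --allow-any-host (-a) are assumptions, not values read from discovery_remove_ifc.sh.

# Representative target configuration (parameters assumed, not taken from the script).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp
$rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -f ipv4 -a 10.0.0.2 -s 8009
$rpc bdev_null_create null0 1000 512        # 1000 MB null bdev, 512-byte blocks (sizes assumed)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKTESTSERIAL
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420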
00:28:14.849 [2024-05-15 19:43:40.836755] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:14.849 EAL: No free 2048 kB hugepages reported on node 1 00:28:14.849 [2024-05-15 19:43:40.914076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:14.849 [2024-05-15 19:43:40.986443] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:14.849 [2024-05-15 19:43:40.986484] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:14.849 [2024-05-15 19:43:40.986492] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:14.849 [2024-05-15 19:43:40.986500] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:14.849 [2024-05-15 19:43:40.986507] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:14.849 [2024-05-15 19:43:40.986532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:15.789 19:43:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:15.789 19:43:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:28:15.789 19:43:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:15.789 19:43:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:15.789 19:43:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:15.789 19:43:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:15.789 19:43:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:28:15.789 19:43:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.789 19:43:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:15.789 [2024-05-15 19:43:41.753600] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:15.789 [2024-05-15 19:43:41.761572] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:15.789 [2024-05-15 19:43:41.761764] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:15.789 null0 00:28:15.789 [2024-05-15 19:43:41.793759] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:15.789 19:43:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.789 19:43:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3750944 00:28:15.789 19:43:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3750944 /tmp/host.sock 00:28:15.789 19:43:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:28:15.789 19:43:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 3750944 ']' 00:28:15.789 19:43:41 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:28:15.789 19:43:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:15.789 19:43:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:15.789 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:15.789 19:43:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:15.789 19:43:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:15.789 [2024-05-15 19:43:41.875287] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:28:15.789 [2024-05-15 19:43:41.875343] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3750944 ] 00:28:15.789 EAL: No free 2048 kB hugepages reported on node 1 00:28:15.789 [2024-05-15 19:43:41.955238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.049 [2024-05-15 19:43:42.019926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:16.619 19:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:16.619 19:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:28:16.619 19:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:16.619 19:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:28:16.619 19:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.619 19:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:16.619 19:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.619 19:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:28:16.619 19:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.619 19:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:16.619 19:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.619 19:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:28:16.619 19:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.619 19:43:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:18.001 [2024-05-15 19:43:43.864583] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:18.001 [2024-05-15 19:43:43.864611] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:18.001 [2024-05-15 
19:43:43.864625] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:18.001 [2024-05-15 19:43:43.993030] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:18.262 [2024-05-15 19:43:44.218160] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:18.262 [2024-05-15 19:43:44.218208] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:18.262 [2024-05-15 19:43:44.218232] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:18.262 [2024-05-15 19:43:44.218246] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:18.262 [2024-05-15 19:43:44.218266] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:18.262 19:43:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.262 19:43:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:28:18.262 [2024-05-15 19:43:44.222261] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1ff9b20 was disconnected and freed. delete nvme_qpair. 00:28:18.262 19:43:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:18.262 19:43:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:18.262 19:43:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:18.262 19:43:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.262 19:43:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:18.262 19:43:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:18.262 19:43:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:18.262 19:43:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.262 19:43:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:28:18.262 19:43:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:28:18.262 19:43:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:28:18.262 19:43:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:28:18.262 19:43:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:18.262 19:43:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:18.262 19:43:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:18.262 19:43:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:18.262 19:43:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.262 19:43:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:18.262 19:43:44 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:28:18.262 19:43:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.523 19:43:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:18.523 19:43:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:19.463 19:43:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:19.463 19:43:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:19.463 19:43:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:19.463 19:43:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.463 19:43:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:19.463 19:43:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:19.463 19:43:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:19.463 19:43:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.463 19:43:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:19.463 19:43:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:20.403 19:43:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:20.403 19:43:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:20.403 19:43:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:20.403 19:43:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.403 19:43:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:20.403 19:43:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:20.403 19:43:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:20.403 19:43:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.403 19:43:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:20.403 19:43:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:21.788 19:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:21.788 19:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:21.788 19:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:21.788 19:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:21.788 19:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.788 19:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:21.788 19:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:21.788 19:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.788 19:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:21.788 19:43:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:22.730 19:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:22.730 19:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:22.730 19:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:22.730 19:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.730 19:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:22.730 19:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:22.730 19:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:22.730 19:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.730 19:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:22.730 19:43:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:23.736 [2024-05-15 19:43:49.658544] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:28:23.736 [2024-05-15 19:43:49.658587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.736 [2024-05-15 19:43:49.658598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.736 [2024-05-15 19:43:49.658608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.736 [2024-05-15 19:43:49.658616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.736 [2024-05-15 19:43:49.658624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.736 [2024-05-15 19:43:49.658631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.736 [2024-05-15 19:43:49.658638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.736 [2024-05-15 19:43:49.658645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.736 [2024-05-15 19:43:49.658653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.736 [2024-05-15 19:43:49.658660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.736 [2024-05-15 19:43:49.658668] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc0e10 is same with the state(5) to be set 00:28:23.736 [2024-05-15 19:43:49.668564] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fc0e10 (9): Bad file descriptor 
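The trace above is the core of the discovery_remove_ifc flow: the target-side address is deleted and the link downed inside the cvl_0_0_ns_spdk namespace, the host's TCP qpair to 10.0.0.2:4420 then starts failing with errno 110, and the test polls the bdev list over /tmp/host.sock until nvme0n1 disappears. A minimal bash sketch of that remove-and-wait pattern follows; the namespace, interface, socket and RPC names are taken from this log, while the loop is a simplified stand-in for the wait_for_bdev/get_bdev_list helpers in discovery_remove_ifc.sh, not a copy of them, and the rpc.py path is assumed from the workspace layout.

# Remove the target-side interface and wait for the discovered bdev to vanish.
# NETNS/IFACE/HOST_SOCK are the values used in this run; RPC is assumed to be
# scripts/rpc.py in the checked-out SPDK tree.
NETNS=cvl_0_0_ns_spdk
IFACE=cvl_0_0
HOST_SOCK=/tmp/host.sock
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

ip netns exec "$NETNS" ip addr del 10.0.0.2/24 dev "$IFACE"
ip netns exec "$NETNS" ip link set "$IFACE" down

# Poll bdev_get_bdevs (as get_bdev_list does) until nvme0n1 is gone.
while "$RPC" -s "$HOST_SOCK" bdev_get_bdevs | jq -r '.[].name' | sort | xargs | grep -qw nvme0n1; do
    sleep 1
done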
00:28:23.736 [2024-05-15 19:43:49.678606] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:23.736 19:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:23.736 19:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:23.736 19:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:23.736 19:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:23.736 19:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.736 19:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:23.736 19:43:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:24.678 [2024-05-15 19:43:50.706554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:25.619 [2024-05-15 19:43:51.730412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:25.619 [2024-05-15 19:43:51.730502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fc0e10 with addr=10.0.0.2, port=4420 00:28:25.619 [2024-05-15 19:43:51.730536] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc0e10 is same with the state(5) to be set 00:28:25.619 [2024-05-15 19:43:51.731614] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fc0e10 (9): Bad file descriptor 00:28:25.619 [2024-05-15 19:43:51.731691] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.619 [2024-05-15 19:43:51.731741] bdev_nvme.c:6718:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:28:25.619 [2024-05-15 19:43:51.731797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:25.620 [2024-05-15 19:43:51.731825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.620 [2024-05-15 19:43:51.731853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:25.620 [2024-05-15 19:43:51.731874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.620 [2024-05-15 19:43:51.731897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:25.620 [2024-05-15 19:43:51.731919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.620 [2024-05-15 19:43:51.731942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:25.620 [2024-05-15 19:43:51.731964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.620 [2024-05-15 19:43:51.731988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:25.620 [2024-05-15 19:43:51.732009] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.620 [2024-05-15 19:43:51.732030] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:28:25.620 [2024-05-15 19:43:51.732059] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fc02a0 (9): Bad file descriptor 00:28:25.620 [2024-05-15 19:43:51.732724] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:28:25.620 [2024-05-15 19:43:51.732757] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:28:25.620 19:43:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.620 19:43:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:25.620 19:43:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:27.005 19:43:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:27.005 19:43:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:27.005 19:43:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.005 19:43:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:27.005 19:43:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:27.005 19:43:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:27.005 19:43:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:27.005 19:43:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.005 19:43:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:28:27.005 19:43:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:27.005 19:43:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:27.005 19:43:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:28:27.005 19:43:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:27.005 19:43:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:27.005 19:43:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:27.005 19:43:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:27.005 19:43:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.005 19:43:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:27.005 19:43:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:27.005 19:43:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.005 19:43:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:27.005 19:43:52 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:27.575 [2024-05-15 19:43:53.748055] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:27.575 [2024-05-15 19:43:53.748077] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:27.575 [2024-05-15 19:43:53.748091] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:27.835 [2024-05-15 19:43:53.878550] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:28:27.835 [2024-05-15 19:43:53.978600] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:27.835 [2024-05-15 19:43:53.978637] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:27.835 [2024-05-15 19:43:53.978659] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:27.835 [2024-05-15 19:43:53.978674] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:28:27.835 [2024-05-15 19:43:53.978682] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:27.835 19:43:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:27.835 19:43:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:27.835 19:43:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:27.835 19:43:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.835 19:43:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:27.836 19:43:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:27.836 19:43:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:27.836 [2024-05-15 19:43:53.986607] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1fcdae0 was disconnected and freed. delete nvme_qpair. 
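The automatic re-attach recorded above (nvme1 appearing once the address is re-added and the link is brought back up) is driven by the persistent discovery service the test registered earlier over /tmp/host.sock. For reference, a hedged sketch of those RPCs as standalone commands; every flag below is copied from the rpc_cmd lines earlier in this trace, and only the rpc.py path is assumed from the workspace layout.

# RPCs issued to the second nvmf_tgt (the "host" app, started with --wait-for-rpc):
# apply the bdev_nvme options used in this run, finish framework init, then
# register a persistent discovery service against the 10.0.0.2:8009 subsystem.
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock"

$RPC bdev_nvme_set_options -e 1
$RPC framework_start_init
$RPC bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 \
    --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach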
00:28:27.836 19:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.097 19:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:28:28.097 19:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:28:28.097 19:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3750944 00:28:28.097 19:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 3750944 ']' 00:28:28.097 19:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 3750944 00:28:28.097 19:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:28:28.097 19:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:28.097 19:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3750944 00:28:28.097 19:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:28.097 19:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:28.097 19:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3750944' 00:28:28.097 killing process with pid 3750944 00:28:28.097 19:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 3750944 00:28:28.097 19:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 3750944 00:28:28.097 19:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:28:28.097 19:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:28.097 19:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:28:28.097 19:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:28.097 19:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:28:28.097 19:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:28.097 19:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:28.097 rmmod nvme_tcp 00:28:28.097 rmmod nvme_fabrics 00:28:28.097 rmmod nvme_keyring 00:28:28.097 19:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:28.097 19:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:28:28.097 19:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:28:28.097 19:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 3750628 ']' 00:28:28.097 19:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 3750628 00:28:28.097 19:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 3750628 ']' 00:28:28.097 19:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 3750628 00:28:28.358 19:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:28:28.358 19:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:28.358 19:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3750628 
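Teardown here is two-stage: the host app (pid 3750944) is killed first, then nvmftestfini stops the target (pid 3750628) and unloads the kernel NVMe-oF modules, which is where the rmmod nvme_tcp / nvme_fabrics / nvme_keyring messages above come from. A rough sketch of that module cleanup, assuming nothing else still holds the modules; the retry loop is a simplification of the {1..20} loop visible in the trace.

# Module cleanup mirroring nvmfcleanup above: modprobe -r also drops the now
# unused dependencies (hence the extra rmmod lines), and the retry tolerates a
# module that is still briefly in use right after the target exits.
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break
    sleep 1
done
modprobe -v -r nvme-fabrics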
00:28:28.358 19:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:28:28.358 19:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:28.358 19:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3750628' 00:28:28.358 killing process with pid 3750628 00:28:28.358 19:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 3750628 00:28:28.358 [2024-05-15 19:43:54.338677] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:28.358 19:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 3750628 00:28:28.358 19:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:28.358 19:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:28.358 19:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:28.358 19:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:28.358 19:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:28.358 19:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:28.358 19:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:28.358 19:43:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:30.903 19:43:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:30.903 00:28:30.903 real 0m24.432s 00:28:30.903 user 0m26.896s 00:28:30.903 sys 0m7.661s 00:28:30.903 19:43:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:30.903 19:43:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:30.903 ************************************ 00:28:30.903 END TEST nvmf_discovery_remove_ifc 00:28:30.903 ************************************ 00:28:30.903 19:43:56 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:30.903 19:43:56 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:30.903 19:43:56 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:30.903 19:43:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:30.903 ************************************ 00:28:30.903 START TEST nvmf_identify_kernel_target 00:28:30.903 ************************************ 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:30.903 * Looking for test storage... 
00:28:30.903 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:28:30.903 19:43:56 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:39.046 
19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:39.046 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:39.046 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
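The xtrace above is nvmf/common.sh mapping each supported e810 PCI function to its kernel net device by globbing sysfs; the "Found net devices under ..." echoes that follow report cvl_0_0 and cvl_0_1 for this system. A hedged sketch of that mapping step on its own, using the PCI addresses detected in this run:

# For each NIC PCI function found earlier, list the netdev names the kernel
# exposes under sysfs (this is what the pci_net_devs glob captures above).
for pci in 0000:31:00.0 0000:31:00.1; do
    for path in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$path" ] && echo "Found net devices under $pci: ${path##*/}"
    done
done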
00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:39.046 Found net devices under 0000:31:00.0: cvl_0_0 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:39.046 Found net devices under 0000:31:00.1: cvl_0_1 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:39.046 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:39.047 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:39.047 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:39.047 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:39.047 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:39.047 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:39.047 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip 
addr add 10.0.0.1/24 dev cvl_0_1 00:28:39.047 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:39.047 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:39.047 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:39.047 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:39.047 19:44:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:39.047 19:44:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:39.047 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:39.047 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.716 ms 00:28:39.047 00:28:39.047 --- 10.0.0.2 ping statistics --- 00:28:39.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.047 rtt min/avg/max/mdev = 0.716/0.716/0.716/0.000 ms 00:28:39.047 19:44:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:39.047 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:39.047 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.400 ms 00:28:39.047 00:28:39.047 --- 10.0.0.1 ping statistics --- 00:28:39.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.047 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:28:39.047 19:44:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:39.047 19:44:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:28:39.047 19:44:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:39.047 19:44:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:39.047 19:44:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:39.047 19:44:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:39.047 19:44:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:39.047 19:44:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:39.047 19:44:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:39.047 19:44:05 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:28:39.047 19:44:05 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:28:39.047 19:44:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:28:39.047 19:44:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:39.047 19:44:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:39.047 19:44:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.047 19:44:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.047 19:44:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:39.047 19:44:05 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.047 19:44:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:39.047 19:44:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:39.047 19:44:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:39.047 19:44:05 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:28:39.047 19:44:05 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:39.047 19:44:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:39.047 19:44:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:28:39.047 19:44:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:39.047 19:44:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:39.047 19:44:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:39.047 19:44:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:28:39.047 19:44:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:28:39.047 19:44:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:28:39.047 19:44:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:39.047 19:44:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:43.266 Waiting for block devices as requested 00:28:43.266 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:43.266 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:43.266 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:43.266 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:43.266 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:43.266 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:43.266 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:43.266 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:43.266 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:43.527 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:43.527 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:43.788 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:43.788 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:43.788 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:44.049 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:44.049 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:44.049 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:44.309 19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:44.309 19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:44.309 19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:28:44.309 19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:28:44.309 
19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:44.309 19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:28:44.309 19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:28:44.309 19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:44.309 19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:44.571 No valid GPT data, bailing 00:28:44.572 19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:44.572 19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:28:44.572 19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:28:44.572 19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:28:44.572 19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:28:44.572 19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:44.572 19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:44.572 19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:44.572 19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:44.572 19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:28:44.572 19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:28:44.572 19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:28:44.572 19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:28:44.572 19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:28:44.572 19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:28:44.572 19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:28:44.572 19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:44.572 19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:28:44.572 00:28:44.572 Discovery Log Number of Records 2, Generation counter 2 00:28:44.572 =====Discovery Log Entry 0====== 00:28:44.572 trtype: tcp 00:28:44.572 adrfam: ipv4 00:28:44.572 subtype: current discovery subsystem 00:28:44.572 treq: not specified, sq flow control disable supported 00:28:44.572 portid: 1 00:28:44.572 trsvcid: 4420 00:28:44.572 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:44.572 traddr: 10.0.0.1 00:28:44.572 eflags: none 00:28:44.572 sectype: none 00:28:44.572 =====Discovery Log Entry 1====== 00:28:44.572 trtype: tcp 00:28:44.572 adrfam: ipv4 00:28:44.572 subtype: nvme subsystem 00:28:44.572 treq: not 
specified, sq flow control disable supported 00:28:44.572 portid: 1 00:28:44.572 trsvcid: 4420 00:28:44.572 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:44.572 traddr: 10.0.0.1 00:28:44.572 eflags: none 00:28:44.572 sectype: none 00:28:44.572 19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:28:44.572 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:28:44.572 EAL: No free 2048 kB hugepages reported on node 1 00:28:44.572 ===================================================== 00:28:44.572 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:44.572 ===================================================== 00:28:44.572 Controller Capabilities/Features 00:28:44.572 ================================ 00:28:44.572 Vendor ID: 0000 00:28:44.572 Subsystem Vendor ID: 0000 00:28:44.572 Serial Number: 779e7a4d7a5ae135149c 00:28:44.572 Model Number: Linux 00:28:44.572 Firmware Version: 6.7.0-68 00:28:44.572 Recommended Arb Burst: 0 00:28:44.572 IEEE OUI Identifier: 00 00 00 00:28:44.572 Multi-path I/O 00:28:44.572 May have multiple subsystem ports: No 00:28:44.572 May have multiple controllers: No 00:28:44.572 Associated with SR-IOV VF: No 00:28:44.572 Max Data Transfer Size: Unlimited 00:28:44.572 Max Number of Namespaces: 0 00:28:44.572 Max Number of I/O Queues: 1024 00:28:44.572 NVMe Specification Version (VS): 1.3 00:28:44.572 NVMe Specification Version (Identify): 1.3 00:28:44.572 Maximum Queue Entries: 1024 00:28:44.572 Contiguous Queues Required: No 00:28:44.572 Arbitration Mechanisms Supported 00:28:44.572 Weighted Round Robin: Not Supported 00:28:44.572 Vendor Specific: Not Supported 00:28:44.572 Reset Timeout: 7500 ms 00:28:44.572 Doorbell Stride: 4 bytes 00:28:44.572 NVM Subsystem Reset: Not Supported 00:28:44.572 Command Sets Supported 00:28:44.572 NVM Command Set: Supported 00:28:44.572 Boot Partition: Not Supported 00:28:44.572 Memory Page Size Minimum: 4096 bytes 00:28:44.572 Memory Page Size Maximum: 4096 bytes 00:28:44.572 Persistent Memory Region: Not Supported 00:28:44.572 Optional Asynchronous Events Supported 00:28:44.572 Namespace Attribute Notices: Not Supported 00:28:44.572 Firmware Activation Notices: Not Supported 00:28:44.572 ANA Change Notices: Not Supported 00:28:44.572 PLE Aggregate Log Change Notices: Not Supported 00:28:44.572 LBA Status Info Alert Notices: Not Supported 00:28:44.572 EGE Aggregate Log Change Notices: Not Supported 00:28:44.572 Normal NVM Subsystem Shutdown event: Not Supported 00:28:44.572 Zone Descriptor Change Notices: Not Supported 00:28:44.572 Discovery Log Change Notices: Supported 00:28:44.572 Controller Attributes 00:28:44.572 128-bit Host Identifier: Not Supported 00:28:44.572 Non-Operational Permissive Mode: Not Supported 00:28:44.572 NVM Sets: Not Supported 00:28:44.572 Read Recovery Levels: Not Supported 00:28:44.572 Endurance Groups: Not Supported 00:28:44.572 Predictable Latency Mode: Not Supported 00:28:44.572 Traffic Based Keep ALive: Not Supported 00:28:44.572 Namespace Granularity: Not Supported 00:28:44.572 SQ Associations: Not Supported 00:28:44.572 UUID List: Not Supported 00:28:44.572 Multi-Domain Subsystem: Not Supported 00:28:44.572 Fixed Capacity Management: Not Supported 00:28:44.572 Variable Capacity Management: Not Supported 00:28:44.572 Delete Endurance Group: Not Supported 00:28:44.572 Delete NVM Set: Not Supported 00:28:44.572 
Extended LBA Formats Supported: Not Supported 00:28:44.572 Flexible Data Placement Supported: Not Supported 00:28:44.572 00:28:44.572 Controller Memory Buffer Support 00:28:44.572 ================================ 00:28:44.572 Supported: No 00:28:44.572 00:28:44.572 Persistent Memory Region Support 00:28:44.572 ================================ 00:28:44.572 Supported: No 00:28:44.572 00:28:44.572 Admin Command Set Attributes 00:28:44.572 ============================ 00:28:44.572 Security Send/Receive: Not Supported 00:28:44.572 Format NVM: Not Supported 00:28:44.572 Firmware Activate/Download: Not Supported 00:28:44.572 Namespace Management: Not Supported 00:28:44.572 Device Self-Test: Not Supported 00:28:44.572 Directives: Not Supported 00:28:44.572 NVMe-MI: Not Supported 00:28:44.572 Virtualization Management: Not Supported 00:28:44.572 Doorbell Buffer Config: Not Supported 00:28:44.572 Get LBA Status Capability: Not Supported 00:28:44.572 Command & Feature Lockdown Capability: Not Supported 00:28:44.572 Abort Command Limit: 1 00:28:44.572 Async Event Request Limit: 1 00:28:44.572 Number of Firmware Slots: N/A 00:28:44.572 Firmware Slot 1 Read-Only: N/A 00:28:44.572 Firmware Activation Without Reset: N/A 00:28:44.572 Multiple Update Detection Support: N/A 00:28:44.572 Firmware Update Granularity: No Information Provided 00:28:44.572 Per-Namespace SMART Log: No 00:28:44.572 Asymmetric Namespace Access Log Page: Not Supported 00:28:44.572 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:44.572 Command Effects Log Page: Not Supported 00:28:44.572 Get Log Page Extended Data: Supported 00:28:44.572 Telemetry Log Pages: Not Supported 00:28:44.572 Persistent Event Log Pages: Not Supported 00:28:44.572 Supported Log Pages Log Page: May Support 00:28:44.572 Commands Supported & Effects Log Page: Not Supported 00:28:44.572 Feature Identifiers & Effects Log Page:May Support 00:28:44.572 NVMe-MI Commands & Effects Log Page: May Support 00:28:44.572 Data Area 4 for Telemetry Log: Not Supported 00:28:44.572 Error Log Page Entries Supported: 1 00:28:44.572 Keep Alive: Not Supported 00:28:44.572 00:28:44.572 NVM Command Set Attributes 00:28:44.572 ========================== 00:28:44.572 Submission Queue Entry Size 00:28:44.572 Max: 1 00:28:44.572 Min: 1 00:28:44.572 Completion Queue Entry Size 00:28:44.572 Max: 1 00:28:44.572 Min: 1 00:28:44.572 Number of Namespaces: 0 00:28:44.572 Compare Command: Not Supported 00:28:44.572 Write Uncorrectable Command: Not Supported 00:28:44.572 Dataset Management Command: Not Supported 00:28:44.572 Write Zeroes Command: Not Supported 00:28:44.572 Set Features Save Field: Not Supported 00:28:44.572 Reservations: Not Supported 00:28:44.572 Timestamp: Not Supported 00:28:44.572 Copy: Not Supported 00:28:44.572 Volatile Write Cache: Not Present 00:28:44.572 Atomic Write Unit (Normal): 1 00:28:44.572 Atomic Write Unit (PFail): 1 00:28:44.572 Atomic Compare & Write Unit: 1 00:28:44.572 Fused Compare & Write: Not Supported 00:28:44.572 Scatter-Gather List 00:28:44.572 SGL Command Set: Supported 00:28:44.572 SGL Keyed: Not Supported 00:28:44.572 SGL Bit Bucket Descriptor: Not Supported 00:28:44.572 SGL Metadata Pointer: Not Supported 00:28:44.572 Oversized SGL: Not Supported 00:28:44.572 SGL Metadata Address: Not Supported 00:28:44.572 SGL Offset: Supported 00:28:44.572 Transport SGL Data Block: Not Supported 00:28:44.572 Replay Protected Memory Block: Not Supported 00:28:44.572 00:28:44.572 Firmware Slot Information 00:28:44.572 ========================= 00:28:44.572 
Active slot: 0 00:28:44.572 00:28:44.572 00:28:44.572 Error Log 00:28:44.572 ========= 00:28:44.572 00:28:44.572 Active Namespaces 00:28:44.572 ================= 00:28:44.573 Discovery Log Page 00:28:44.573 ================== 00:28:44.573 Generation Counter: 2 00:28:44.573 Number of Records: 2 00:28:44.573 Record Format: 0 00:28:44.573 00:28:44.573 Discovery Log Entry 0 00:28:44.573 ---------------------- 00:28:44.573 Transport Type: 3 (TCP) 00:28:44.573 Address Family: 1 (IPv4) 00:28:44.573 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:44.573 Entry Flags: 00:28:44.573 Duplicate Returned Information: 0 00:28:44.573 Explicit Persistent Connection Support for Discovery: 0 00:28:44.573 Transport Requirements: 00:28:44.573 Secure Channel: Not Specified 00:28:44.573 Port ID: 1 (0x0001) 00:28:44.573 Controller ID: 65535 (0xffff) 00:28:44.573 Admin Max SQ Size: 32 00:28:44.573 Transport Service Identifier: 4420 00:28:44.573 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:44.573 Transport Address: 10.0.0.1 00:28:44.573 Discovery Log Entry 1 00:28:44.573 ---------------------- 00:28:44.573 Transport Type: 3 (TCP) 00:28:44.573 Address Family: 1 (IPv4) 00:28:44.573 Subsystem Type: 2 (NVM Subsystem) 00:28:44.573 Entry Flags: 00:28:44.573 Duplicate Returned Information: 0 00:28:44.573 Explicit Persistent Connection Support for Discovery: 0 00:28:44.573 Transport Requirements: 00:28:44.573 Secure Channel: Not Specified 00:28:44.573 Port ID: 1 (0x0001) 00:28:44.573 Controller ID: 65535 (0xffff) 00:28:44.573 Admin Max SQ Size: 32 00:28:44.573 Transport Service Identifier: 4420 00:28:44.573 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:28:44.573 Transport Address: 10.0.0.1 00:28:44.573 19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:44.834 EAL: No free 2048 kB hugepages reported on node 1 00:28:44.835 get_feature(0x01) failed 00:28:44.835 get_feature(0x02) failed 00:28:44.835 get_feature(0x04) failed 00:28:44.835 ===================================================== 00:28:44.835 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:44.835 ===================================================== 00:28:44.835 Controller Capabilities/Features 00:28:44.835 ================================ 00:28:44.835 Vendor ID: 0000 00:28:44.835 Subsystem Vendor ID: 0000 00:28:44.835 Serial Number: a914f09011d95b3e30e4 00:28:44.835 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:28:44.835 Firmware Version: 6.7.0-68 00:28:44.835 Recommended Arb Burst: 6 00:28:44.835 IEEE OUI Identifier: 00 00 00 00:28:44.835 Multi-path I/O 00:28:44.835 May have multiple subsystem ports: Yes 00:28:44.835 May have multiple controllers: Yes 00:28:44.835 Associated with SR-IOV VF: No 00:28:44.835 Max Data Transfer Size: Unlimited 00:28:44.835 Max Number of Namespaces: 1024 00:28:44.835 Max Number of I/O Queues: 128 00:28:44.835 NVMe Specification Version (VS): 1.3 00:28:44.835 NVMe Specification Version (Identify): 1.3 00:28:44.835 Maximum Queue Entries: 1024 00:28:44.835 Contiguous Queues Required: No 00:28:44.835 Arbitration Mechanisms Supported 00:28:44.835 Weighted Round Robin: Not Supported 00:28:44.835 Vendor Specific: Not Supported 00:28:44.835 Reset Timeout: 7500 ms 00:28:44.835 Doorbell Stride: 4 bytes 00:28:44.835 NVM Subsystem Reset: Not Supported 
00:28:44.835 Command Sets Supported 00:28:44.835 NVM Command Set: Supported 00:28:44.835 Boot Partition: Not Supported 00:28:44.835 Memory Page Size Minimum: 4096 bytes 00:28:44.835 Memory Page Size Maximum: 4096 bytes 00:28:44.835 Persistent Memory Region: Not Supported 00:28:44.835 Optional Asynchronous Events Supported 00:28:44.835 Namespace Attribute Notices: Supported 00:28:44.835 Firmware Activation Notices: Not Supported 00:28:44.835 ANA Change Notices: Supported 00:28:44.835 PLE Aggregate Log Change Notices: Not Supported 00:28:44.835 LBA Status Info Alert Notices: Not Supported 00:28:44.835 EGE Aggregate Log Change Notices: Not Supported 00:28:44.835 Normal NVM Subsystem Shutdown event: Not Supported 00:28:44.835 Zone Descriptor Change Notices: Not Supported 00:28:44.835 Discovery Log Change Notices: Not Supported 00:28:44.835 Controller Attributes 00:28:44.835 128-bit Host Identifier: Supported 00:28:44.835 Non-Operational Permissive Mode: Not Supported 00:28:44.835 NVM Sets: Not Supported 00:28:44.835 Read Recovery Levels: Not Supported 00:28:44.835 Endurance Groups: Not Supported 00:28:44.835 Predictable Latency Mode: Not Supported 00:28:44.835 Traffic Based Keep ALive: Supported 00:28:44.835 Namespace Granularity: Not Supported 00:28:44.835 SQ Associations: Not Supported 00:28:44.835 UUID List: Not Supported 00:28:44.835 Multi-Domain Subsystem: Not Supported 00:28:44.835 Fixed Capacity Management: Not Supported 00:28:44.835 Variable Capacity Management: Not Supported 00:28:44.835 Delete Endurance Group: Not Supported 00:28:44.835 Delete NVM Set: Not Supported 00:28:44.835 Extended LBA Formats Supported: Not Supported 00:28:44.835 Flexible Data Placement Supported: Not Supported 00:28:44.835 00:28:44.835 Controller Memory Buffer Support 00:28:44.835 ================================ 00:28:44.835 Supported: No 00:28:44.835 00:28:44.835 Persistent Memory Region Support 00:28:44.835 ================================ 00:28:44.835 Supported: No 00:28:44.835 00:28:44.835 Admin Command Set Attributes 00:28:44.835 ============================ 00:28:44.835 Security Send/Receive: Not Supported 00:28:44.835 Format NVM: Not Supported 00:28:44.835 Firmware Activate/Download: Not Supported 00:28:44.835 Namespace Management: Not Supported 00:28:44.835 Device Self-Test: Not Supported 00:28:44.835 Directives: Not Supported 00:28:44.835 NVMe-MI: Not Supported 00:28:44.835 Virtualization Management: Not Supported 00:28:44.835 Doorbell Buffer Config: Not Supported 00:28:44.835 Get LBA Status Capability: Not Supported 00:28:44.835 Command & Feature Lockdown Capability: Not Supported 00:28:44.835 Abort Command Limit: 4 00:28:44.835 Async Event Request Limit: 4 00:28:44.835 Number of Firmware Slots: N/A 00:28:44.835 Firmware Slot 1 Read-Only: N/A 00:28:44.835 Firmware Activation Without Reset: N/A 00:28:44.835 Multiple Update Detection Support: N/A 00:28:44.835 Firmware Update Granularity: No Information Provided 00:28:44.835 Per-Namespace SMART Log: Yes 00:28:44.835 Asymmetric Namespace Access Log Page: Supported 00:28:44.835 ANA Transition Time : 10 sec 00:28:44.835 00:28:44.835 Asymmetric Namespace Access Capabilities 00:28:44.835 ANA Optimized State : Supported 00:28:44.835 ANA Non-Optimized State : Supported 00:28:44.835 ANA Inaccessible State : Supported 00:28:44.835 ANA Persistent Loss State : Supported 00:28:44.835 ANA Change State : Supported 00:28:44.835 ANAGRPID is not changed : No 00:28:44.835 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:28:44.835 00:28:44.835 ANA Group Identifier 
Maximum : 128 00:28:44.835 Number of ANA Group Identifiers : 128 00:28:44.835 Max Number of Allowed Namespaces : 1024 00:28:44.835 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:28:44.835 Command Effects Log Page: Supported 00:28:44.835 Get Log Page Extended Data: Supported 00:28:44.835 Telemetry Log Pages: Not Supported 00:28:44.835 Persistent Event Log Pages: Not Supported 00:28:44.835 Supported Log Pages Log Page: May Support 00:28:44.835 Commands Supported & Effects Log Page: Not Supported 00:28:44.835 Feature Identifiers & Effects Log Page:May Support 00:28:44.835 NVMe-MI Commands & Effects Log Page: May Support 00:28:44.835 Data Area 4 for Telemetry Log: Not Supported 00:28:44.835 Error Log Page Entries Supported: 128 00:28:44.835 Keep Alive: Supported 00:28:44.835 Keep Alive Granularity: 1000 ms 00:28:44.835 00:28:44.835 NVM Command Set Attributes 00:28:44.835 ========================== 00:28:44.835 Submission Queue Entry Size 00:28:44.835 Max: 64 00:28:44.835 Min: 64 00:28:44.835 Completion Queue Entry Size 00:28:44.835 Max: 16 00:28:44.835 Min: 16 00:28:44.835 Number of Namespaces: 1024 00:28:44.835 Compare Command: Not Supported 00:28:44.835 Write Uncorrectable Command: Not Supported 00:28:44.835 Dataset Management Command: Supported 00:28:44.835 Write Zeroes Command: Supported 00:28:44.835 Set Features Save Field: Not Supported 00:28:44.835 Reservations: Not Supported 00:28:44.835 Timestamp: Not Supported 00:28:44.835 Copy: Not Supported 00:28:44.835 Volatile Write Cache: Present 00:28:44.835 Atomic Write Unit (Normal): 1 00:28:44.835 Atomic Write Unit (PFail): 1 00:28:44.835 Atomic Compare & Write Unit: 1 00:28:44.835 Fused Compare & Write: Not Supported 00:28:44.835 Scatter-Gather List 00:28:44.835 SGL Command Set: Supported 00:28:44.835 SGL Keyed: Not Supported 00:28:44.835 SGL Bit Bucket Descriptor: Not Supported 00:28:44.835 SGL Metadata Pointer: Not Supported 00:28:44.835 Oversized SGL: Not Supported 00:28:44.835 SGL Metadata Address: Not Supported 00:28:44.835 SGL Offset: Supported 00:28:44.835 Transport SGL Data Block: Not Supported 00:28:44.835 Replay Protected Memory Block: Not Supported 00:28:44.835 00:28:44.835 Firmware Slot Information 00:28:44.835 ========================= 00:28:44.835 Active slot: 0 00:28:44.835 00:28:44.835 Asymmetric Namespace Access 00:28:44.835 =========================== 00:28:44.835 Change Count : 0 00:28:44.835 Number of ANA Group Descriptors : 1 00:28:44.835 ANA Group Descriptor : 0 00:28:44.835 ANA Group ID : 1 00:28:44.835 Number of NSID Values : 1 00:28:44.835 Change Count : 0 00:28:44.835 ANA State : 1 00:28:44.835 Namespace Identifier : 1 00:28:44.835 00:28:44.835 Commands Supported and Effects 00:28:44.835 ============================== 00:28:44.835 Admin Commands 00:28:44.835 -------------- 00:28:44.835 Get Log Page (02h): Supported 00:28:44.835 Identify (06h): Supported 00:28:44.835 Abort (08h): Supported 00:28:44.835 Set Features (09h): Supported 00:28:44.835 Get Features (0Ah): Supported 00:28:44.835 Asynchronous Event Request (0Ch): Supported 00:28:44.835 Keep Alive (18h): Supported 00:28:44.835 I/O Commands 00:28:44.835 ------------ 00:28:44.835 Flush (00h): Supported 00:28:44.835 Write (01h): Supported LBA-Change 00:28:44.835 Read (02h): Supported 00:28:44.835 Write Zeroes (08h): Supported LBA-Change 00:28:44.835 Dataset Management (09h): Supported 00:28:44.835 00:28:44.835 Error Log 00:28:44.835 ========= 00:28:44.835 Entry: 0 00:28:44.835 Error Count: 0x3 00:28:44.835 Submission Queue Id: 0x0 00:28:44.835 Command Id: 0x5 
00:28:44.835 Phase Bit: 0 00:28:44.835 Status Code: 0x2 00:28:44.835 Status Code Type: 0x0 00:28:44.835 Do Not Retry: 1 00:28:44.835 Error Location: 0x28 00:28:44.835 LBA: 0x0 00:28:44.835 Namespace: 0x0 00:28:44.835 Vendor Log Page: 0x0 00:28:44.835 ----------- 00:28:44.835 Entry: 1 00:28:44.836 Error Count: 0x2 00:28:44.836 Submission Queue Id: 0x0 00:28:44.836 Command Id: 0x5 00:28:44.836 Phase Bit: 0 00:28:44.836 Status Code: 0x2 00:28:44.836 Status Code Type: 0x0 00:28:44.836 Do Not Retry: 1 00:28:44.836 Error Location: 0x28 00:28:44.836 LBA: 0x0 00:28:44.836 Namespace: 0x0 00:28:44.836 Vendor Log Page: 0x0 00:28:44.836 ----------- 00:28:44.836 Entry: 2 00:28:44.836 Error Count: 0x1 00:28:44.836 Submission Queue Id: 0x0 00:28:44.836 Command Id: 0x4 00:28:44.836 Phase Bit: 0 00:28:44.836 Status Code: 0x2 00:28:44.836 Status Code Type: 0x0 00:28:44.836 Do Not Retry: 1 00:28:44.836 Error Location: 0x28 00:28:44.836 LBA: 0x0 00:28:44.836 Namespace: 0x0 00:28:44.836 Vendor Log Page: 0x0 00:28:44.836 00:28:44.836 Number of Queues 00:28:44.836 ================ 00:28:44.836 Number of I/O Submission Queues: 128 00:28:44.836 Number of I/O Completion Queues: 128 00:28:44.836 00:28:44.836 ZNS Specific Controller Data 00:28:44.836 ============================ 00:28:44.836 Zone Append Size Limit: 0 00:28:44.836 00:28:44.836 00:28:44.836 Active Namespaces 00:28:44.836 ================= 00:28:44.836 get_feature(0x05) failed 00:28:44.836 Namespace ID:1 00:28:44.836 Command Set Identifier: NVM (00h) 00:28:44.836 Deallocate: Supported 00:28:44.836 Deallocated/Unwritten Error: Not Supported 00:28:44.836 Deallocated Read Value: Unknown 00:28:44.836 Deallocate in Write Zeroes: Not Supported 00:28:44.836 Deallocated Guard Field: 0xFFFF 00:28:44.836 Flush: Supported 00:28:44.836 Reservation: Not Supported 00:28:44.836 Namespace Sharing Capabilities: Multiple Controllers 00:28:44.836 Size (in LBAs): 3750748848 (1788GiB) 00:28:44.836 Capacity (in LBAs): 3750748848 (1788GiB) 00:28:44.836 Utilization (in LBAs): 3750748848 (1788GiB) 00:28:44.836 UUID: e0dacf49-d5f7-4660-bfc3-cfa8486f42c3 00:28:44.836 Thin Provisioning: Not Supported 00:28:44.836 Per-NS Atomic Units: Yes 00:28:44.836 Atomic Write Unit (Normal): 8 00:28:44.836 Atomic Write Unit (PFail): 8 00:28:44.836 Preferred Write Granularity: 8 00:28:44.836 Atomic Compare & Write Unit: 8 00:28:44.836 Atomic Boundary Size (Normal): 0 00:28:44.836 Atomic Boundary Size (PFail): 0 00:28:44.836 Atomic Boundary Offset: 0 00:28:44.836 NGUID/EUI64 Never Reused: No 00:28:44.836 ANA group ID: 1 00:28:44.836 Namespace Write Protected: No 00:28:44.836 Number of LBA Formats: 1 00:28:44.836 Current LBA Format: LBA Format #00 00:28:44.836 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:44.836 00:28:44.836 19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:28:44.836 19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:44.836 19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:28:44.836 19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:44.836 19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:28:44.836 19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:44.836 19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:44.836 rmmod nvme_tcp 00:28:44.836 rmmod nvme_fabrics 00:28:44.836 19:44:10 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:44.836 19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:28:44.836 19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:28:44.836 19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:28:44.836 19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:44.836 19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:44.836 19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:44.836 19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:44.836 19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:44.836 19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:44.836 19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:44.836 19:44:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:47.380 19:44:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:47.380 19:44:12 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:28:47.380 19:44:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:47.380 19:44:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:28:47.380 19:44:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:47.380 19:44:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:47.380 19:44:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:47.380 19:44:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:47.380 19:44:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:47.380 19:44:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:47.380 19:44:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:50.680 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:50.680 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:50.939 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:50.939 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:50.939 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:50.939 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:50.939 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:50.939 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:50.939 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:50.939 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:50.940 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:50.940 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:50.940 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 
00:28:50.940 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:50.940 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:50.940 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:50.940 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:51.512 00:28:51.512 real 0m20.799s 00:28:51.512 user 0m5.459s 00:28:51.512 sys 0m12.199s 00:28:51.512 19:44:17 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:51.512 19:44:17 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:51.512 ************************************ 00:28:51.512 END TEST nvmf_identify_kernel_target 00:28:51.512 ************************************ 00:28:51.512 19:44:17 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:51.512 19:44:17 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:51.512 19:44:17 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:51.512 19:44:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:51.512 ************************************ 00:28:51.512 START TEST nvmf_auth_host 00:28:51.512 ************************************ 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:51.512 * Looking for test storage... 00:28:51.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 
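The nvme gen-hostnqn call traced above gives the auth tests a per-run host identity: NVME_HOSTNQN is the generated nqn.2014-08.org.nvmexpress:uuid:... string and NVME_HOSTID is its UUID portion. A minimal sketch of reproducing that pair; the UUID-suffix expansion is an assumption, the in-tree common.sh may derive NVME_HOSTID differently:

# Sketch: generate a host NQN with nvme-cli and peel off the trailing UUID,
# matching the NVME_HOSTNQN/NVME_HOSTID values seen in the trace.
NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}        # keep only the UUID part (assumed derivation)
echo "$NVME_HOSTNQN" "$NVME_HOSTID"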
00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:28:51.512 19:44:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.656 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:59.656 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:28:59.656 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:59.656 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:59.656 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:59.656 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:59.656 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:59.656 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:28:59.656 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:59.656 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:28:59.656 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:28:59.656 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:28:59.656 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:28:59.656 19:44:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@298 -- # mlx=() 00:28:59.656 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:28:59.656 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:59.656 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:59.656 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:59.656 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:59.656 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:59.656 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:59.656 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:59.656 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:59.656 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:59.656 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:59.656 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:59.656 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:59.656 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:59.656 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:59.656 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:59.656 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:59.656 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:59.656 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:59.656 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:59.656 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:59.656 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:59.656 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:59.656 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.656 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.656 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:59.656 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:59.656 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:59.656 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:59.656 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:59.656 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:59.657 
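The loop above matches each PCI function against the known E810 device IDs (0x1592/0x159b); the lines that follow resolve the net devices bound to those functions through sysfs and report them as cvl_0_0 and cvl_0_1. A condensed sketch of that lookup, using a BDF taken from the log:

# Sketch: list the kernel net devices bound to one PCI function, the same
# /sys/bus/pci/devices/<bdf>/net/* path the pci_net_devs expansion below uses.
bdf=0000:31:00.0
ls "/sys/bus/pci/devices/$bdf/net/"    # -> cvl_0_0 on this node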
19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:59.657 Found net devices under 0000:31:00.0: cvl_0_0 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:59.657 Found net devices under 0000:31:00.1: cvl_0_1 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:59.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:59.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.597 ms 00:28:59.657 00:28:59.657 --- 10.0.0.2 ping statistics --- 00:28:59.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.657 rtt min/avg/max/mdev = 0.597/0.597/0.597/0.000 ms 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:59.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:59.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.332 ms 00:28:59.657 00:28:59.657 --- 10.0.0.1 ping statistics --- 00:28:59.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.657 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=3766374 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 3766374 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 
-- # '[' -z 3766374 ']' 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:59.657 19:44:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.230 19:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:00.230 19:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:29:00.230 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:00.230 19:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:00.230 19:44:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.230 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:00.230 19:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:29:00.230 19:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:29:00.230 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:29:00.230 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:00.230 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:29:00.230 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:29:00.230 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:29:00.230 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:00.230 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=be05a3d5a928382f88a37df97d4fcc82 00:29:00.230 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:29:00.230 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.3xK 00:29:00.230 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key be05a3d5a928382f88a37df97d4fcc82 0 00:29:00.230 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 be05a3d5a928382f88a37df97d4fcc82 0 00:29:00.230 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:29:00.230 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:29:00.230 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=be05a3d5a928382f88a37df97d4fcc82 00:29:00.230 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:29:00.230 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:29:00.493 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.3xK 00:29:00.493 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.3xK 
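The gen_dhchap_key null 32 run above draws 16 random bytes as a hex string via xxd, feeds it through format_key with the DHHC-1 prefix and digest id 0, and stores the result in /tmp/spdk.key-null.3xK with mode 0600. The body of the python one-liner is not shown in the trace; the sketch below assumes the conventional DHHC-1 ASCII encoding (base64 of the secret bytes with a little-endian CRC32 of those bytes appended) and treats the hex string itself as the secret, which is how the trace appears to pass it:

# Sketch of a DHHC-1 secret formatter (assumed encoding, not the in-tree helper).
key=$(xxd -p -c0 -l 16 /dev/urandom)   # 32-char hex string, as in the trace
digest=0                               # 0=null, 1=sha256, 2=sha384, 3=sha512 (the trace's digests map)
python3 - "$key" "$digest" <<'EOF'
import sys, base64, zlib
secret, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(secret).to_bytes(4, "little")   # CRC32 of the secret bytes, little-endian
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(secret + crc).decode()))
EOF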
00:29:00.493 19:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.3xK 00:29:00.493 19:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:29:00.493 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:29:00.493 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:00.493 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:29:00.493 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:29:00.493 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:29:00.493 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:29:00.493 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=683a201e2675b843f8e88dcbb61b7a2327871629eeaea1a49b82f1c364bd5b41 00:29:00.493 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:29:00.493 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Awa 00:29:00.493 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 683a201e2675b843f8e88dcbb61b7a2327871629eeaea1a49b82f1c364bd5b41 3 00:29:00.493 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 683a201e2675b843f8e88dcbb61b7a2327871629eeaea1a49b82f1c364bd5b41 3 00:29:00.493 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:29:00.493 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:29:00.493 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=683a201e2675b843f8e88dcbb61b7a2327871629eeaea1a49b82f1c364bd5b41 00:29:00.493 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:29:00.493 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:29:00.493 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Awa 00:29:00.493 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Awa 00:29:00.493 19:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Awa 00:29:00.493 19:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:29:00.493 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:29:00.493 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:00.493 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:29:00.493 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:29:00.493 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:29:00.493 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:00.493 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c4c9dff8c720bc11efff28761ab5a68f94f6117cf939209b 00:29:00.493 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:29:00.493 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.rNi 00:29:00.493 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c4c9dff8c720bc11efff28761ab5a68f94f6117cf939209b 0 00:29:00.493 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 
c4c9dff8c720bc11efff28761ab5a68f94f6117cf939209b 0 00:29:00.493 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:29:00.494 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:29:00.494 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c4c9dff8c720bc11efff28761ab5a68f94f6117cf939209b 00:29:00.494 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:29:00.494 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:29:00.494 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.rNi 00:29:00.494 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.rNi 00:29:00.494 19:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.rNi 00:29:00.494 19:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:29:00.494 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:29:00.494 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:00.494 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:29:00.494 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:29:00.494 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:29:00.494 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:00.494 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=365561f061bd2c315f58b363808d13621bb25150f4906596 00:29:00.494 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:29:00.494 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.HNI 00:29:00.494 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 365561f061bd2c315f58b363808d13621bb25150f4906596 2 00:29:00.494 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 365561f061bd2c315f58b363808d13621bb25150f4906596 2 00:29:00.494 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:29:00.494 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:29:00.494 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=365561f061bd2c315f58b363808d13621bb25150f4906596 00:29:00.494 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:29:00.494 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.HNI 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.HNI 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.HNI 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:29:00.756 19:44:26 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=76a37becf02b3baccdb7fff622c246b5 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.3XQ 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 76a37becf02b3baccdb7fff622c246b5 1 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 76a37becf02b3baccdb7fff622c246b5 1 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=76a37becf02b3baccdb7fff622c246b5 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.3XQ 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.3XQ 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.3XQ 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=43a18e64c22d908a684722cc2edda7a1 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.7xV 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 43a18e64c22d908a684722cc2edda7a1 1 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 43a18e64c22d908a684722cc2edda7a1 1 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=43a18e64c22d908a684722cc2edda7a1 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.7xV 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.7xV 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.7xV 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 
00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c4cc4244b68840c9bae19170b641c8f05b7fd5ba411c7105 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Cmh 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c4cc4244b68840c9bae19170b641c8f05b7fd5ba411c7105 2 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c4cc4244b68840c9bae19170b641c8f05b7fd5ba411c7105 2 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c4cc4244b68840c9bae19170b641c8f05b7fd5ba411c7105 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Cmh 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Cmh 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Cmh 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=fede60da80f9006ce94056f1ec955c9c 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Rbe 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key fede60da80f9006ce94056f1ec955c9c 0 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 fede60da80f9006ce94056f1ec955c9c 0 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=fede60da80f9006ce94056f1ec955c9c 
00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:29:00.756 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:29:01.017 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Rbe 00:29:01.017 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Rbe 00:29:01.017 19:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Rbe 00:29:01.017 19:44:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:29:01.017 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:29:01.017 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:01.017 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:29:01.017 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:29:01.017 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:29:01.017 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:29:01.017 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f5028d9b4d1d40a595e3a5e6262a0a8c7cd37247cd842f44987fc5fe4c38f150 00:29:01.017 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:29:01.017 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Qwf 00:29:01.017 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f5028d9b4d1d40a595e3a5e6262a0a8c7cd37247cd842f44987fc5fe4c38f150 3 00:29:01.017 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f5028d9b4d1d40a595e3a5e6262a0a8c7cd37247cd842f44987fc5fe4c38f150 3 00:29:01.017 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:29:01.017 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:29:01.017 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f5028d9b4d1d40a595e3a5e6262a0a8c7cd37247cd842f44987fc5fe4c38f150 00:29:01.017 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:29:01.017 19:44:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:29:01.017 19:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Qwf 00:29:01.017 19:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Qwf 00:29:01.017 19:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Qwf 00:29:01.017 19:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:29:01.017 19:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3766374 00:29:01.017 19:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 3766374 ']' 00:29:01.017 19:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:01.017 19:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:01.017 19:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:01.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
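Note on the gen_dhchap_key trace above: it draws random bytes with xxd, then pipes the hex string through an inline "python -" step to produce a DHHC-1 secret file. The sketch below mirrors that flow. The xtrace does not show what the python step computes, so the base64-of-key-plus-CRC32 encoding, its little-endian packing, and the helper name are assumptions, not something confirmed by this log.

  # Sketch only: approximates gen_dhchap_key/format_dhchap_key as seen in the trace above.
  gen_dhchap_key_sketch() {
      # digest is one of null/sha256/sha384/sha512; len is the hex-string length
      # used in the trace (32, 48 or 64 characters).
      local digest=$1 len=$2
      local key file

      key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)      # random hex, as in the log
      file=$(mktemp -t "spdk.key-${digest}.XXX")

      # Assumed encoding: base64(ASCII hex key + 4-byte CRC32). The xtrace only
      # shows that "python -" runs here, not what it does.
      python3 - "$key" "$digest" > "$file" <<'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()
digest = {"null": 0, "sha256": 1, "sha384": 2, "sha512": 3}[sys.argv[2]]
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:%02d:%s:" % (digest, base64.b64encode(key + crc).decode()))
PYEOF

      chmod 0600 "$file"
      echo "$file"
  }

  # e.g. gen_dhchap_key_sketch sha384 48   produces a file like /tmp/spdk.key-sha384.XXX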
00:29:01.017 19:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:01.017 19:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.3xK 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Awa ]] 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Awa 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.rNi 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.HNI ]] 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.HNI 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.3XQ 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.7xV ]] 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7xV 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
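Note on the keyring_file_add_key calls in this stretch of the trace: each generated secret (and, where one exists, its controller counterpart) is registered with the running SPDK target. Outside the test harness the same calls could be issued directly with scripts/rpc.py; the sketch below reuses the temp-file paths from this run and assumes rpc.py is invoked from the spdk repository root against the default /var/tmp/spdk.sock.

  # Sketch: register the generated DHHC-1 secrets with the target keyring,
  # mirroring the rpc_cmd keyring_file_add_key calls in the trace.
  ./scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.3xK
  ./scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Awa
  ./scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.rNi
  ./scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.HNI
  ./scripts/rpc.py keyring_file_add_key key2  /tmp/spdk.key-sha256.3XQ
  ./scripts/rpc.py keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7xV
  # key3, ckey3 and key4 are added the same way (keyid 4 has no controller key).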
00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Cmh 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Rbe ]] 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Rbe 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Qwf 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
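Note on the configure_kernel_target trace that follows: it builds a kernel NVMe/TCP target through configfs by loading nvmet, creating a subsystem with one namespace backed by the local NVMe disk, opening port 1 on 10.0.0.1:4420, and linking the port to the subsystem, then verifies the listener with nvme-cli. A consolidated sketch of those steps is below; the redirect targets of the bare "echo" commands are not visible in the xtrace, so the attribute file names are the standard nvmet configfs ones and should be read as assumptions.

  # Sketch of the configfs steps performed by configure_kernel_target below.
  # Subsystem NQN, backing device and listen address are the values used in this run.
  modprobe nvmet
  nvmet=/sys/kernel/config/nvmet
  subsys="$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0"
  port="$nvmet/ports/1"

  mkdir "$subsys" "$subsys/namespaces/1" "$port"

  echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_serial"   # assumed target file
  echo 1            > "$subsys/attr_allow_any_host"              # assumed target file
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"

  ln -s "$subsys" "$port/subsystems/"

  # The trace then checks the listener (the run also passes --hostnqn/--hostid):
  nvme discover -t tcp -a 10.0.0.1 -s 4420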
00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:01.279 19:44:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:05.487 Waiting for block devices as requested 00:29:05.487 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:29:05.487 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:29:05.487 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:29:05.487 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:29:05.487 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:29:05.487 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:29:05.746 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:29:05.746 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:29:05.746 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:29:06.046 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:29:06.046 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:29:06.046 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:29:06.314 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:29:06.314 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:29:06.314 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:29:06.314 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:29:06.582 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:29:07.524 19:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:29:07.524 19:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:07.524 19:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:29:07.524 19:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:29:07.524 19:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:07.524 19:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:29:07.524 19:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:29:07.524 19:44:33 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:29:07.524 19:44:33 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:07.524 No valid GPT data, bailing 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:29:07.525 00:29:07.525 Discovery Log Number of Records 2, Generation counter 2 00:29:07.525 =====Discovery Log Entry 0====== 00:29:07.525 trtype: tcp 00:29:07.525 adrfam: ipv4 00:29:07.525 subtype: current discovery subsystem 00:29:07.525 treq: not specified, sq flow control disable supported 00:29:07.525 portid: 1 00:29:07.525 trsvcid: 4420 00:29:07.525 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:07.525 traddr: 10.0.0.1 00:29:07.525 eflags: none 00:29:07.525 sectype: none 00:29:07.525 =====Discovery Log Entry 1====== 00:29:07.525 trtype: tcp 00:29:07.525 adrfam: ipv4 00:29:07.525 subtype: nvme subsystem 00:29:07.525 treq: not specified, sq flow control disable supported 00:29:07.525 portid: 1 00:29:07.525 trsvcid: 4420 00:29:07.525 subnqn: nqn.2024-02.io.spdk:cnode0 00:29:07.525 traddr: 10.0.0.1 00:29:07.525 eflags: none 00:29:07.525 sectype: none 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRjOWRmZjhjNzIwYmMxMWVmZmYyODc2MWFiNWE2OGY5NGY2MTE3Y2Y5MzkyMDli+nWHXQ==: 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRjOWRmZjhjNzIwYmMxMWVmZmYyODc2MWFiNWE2OGY5NGY2MTE3Y2Y5MzkyMDli+nWHXQ==: 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: 
]] 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.525 19:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.785 nvme0n1 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.785 
19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmUwNWEzZDVhOTI4MzgyZjg4YTM3ZGY5N2Q0ZmNjODIztVi3: 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjgzYTIwMWUyNjc1Yjg0M2Y4ZTg4ZGNiYjYxYjdhMjMyNzg3MTYyOWVlYWVhMWE0OWI4MmYxYzM2NGJkNWI0MZGS72U=: 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmUwNWEzZDVhOTI4MzgyZjg4YTM3ZGY5N2Q0ZmNjODIztVi3: 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjgzYTIwMWUyNjc1Yjg0M2Y4ZTg4ZGNiYjYxYjdhMjMyNzg3MTYyOWVlYWVhMWE0OWI4MmYxYzM2NGJkNWI0MZGS72U=: ]] 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjgzYTIwMWUyNjc1Yjg0M2Y4ZTg4ZGNiYjYxYjdhMjMyNzg3MTYyOWVlYWVhMWE0OWI4MmYxYzM2NGJkNWI0MZGS72U=: 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.785 
19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.785 19:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.046 nvme0n1 00:29:08.046 19:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.046 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:08.046 19:44:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:08.046 19:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.046 19:44:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.046 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.046 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:08.046 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:08.046 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.046 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.046 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.046 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:08.046 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:08.046 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:08.046 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:08.046 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:08.046 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:08.046 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRjOWRmZjhjNzIwYmMxMWVmZmYyODc2MWFiNWE2OGY5NGY2MTE3Y2Y5MzkyMDli+nWHXQ==: 00:29:08.046 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: 00:29:08.046 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:08.046 19:44:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:08.046 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRjOWRmZjhjNzIwYmMxMWVmZmYyODc2MWFiNWE2OGY5NGY2MTE3Y2Y5MzkyMDli+nWHXQ==: 00:29:08.046 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: ]] 00:29:08.046 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: 00:29:08.046 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:29:08.046 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:08.046 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:08.046 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:08.046 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:08.046 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:08.046 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:08.046 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.046 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.046 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.046 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:08.046 19:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:08.046 19:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:08.046 19:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:08.046 19:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:08.046 19:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:08.046 19:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:08.046 19:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:08.046 19:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:08.046 19:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:08.046 19:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:08.046 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:08.046 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.046 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.306 nvme0n1 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
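Note on the nvmet_auth_set_key calls in these iterations: before each connection attempt, the digest, DH group and secret(s) for the selected keyid are pushed into the kernel target's host entry (the host NQN was created and linked under the subsystem's allowed_hosts earlier in the trace). The xtrace shows only the echo commands, not their redirect targets, so the dhchap_* attribute names below are assumptions based on the usual nvmet configfs host entries; the key strings are the keyid 1 values from this run.

  # Sketch of what nvmet_auth_set_key appears to do for sha256 / ffdhe2048 / keyid 1.
  # Redirect targets are not visible in the xtrace and are assumed here.
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host/dhchap_hash"
  echo ffdhe2048      > "$host/dhchap_dhgroup"
  echo "DHHC-1:00:YzRjOWRmZjhjNzIwYmMxMWVmZmYyODc2MWFiNWE2OGY5NGY2MTE3Y2Y5MzkyMDli+nWHXQ==:" > "$host/dhchap_key"
  echo "DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==:" > "$host/dhchap_ctrl_key"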
00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzZhMzdiZWNmMDJiM2JhY2NkYjdmZmY2MjJjMjQ2YjUOHoho: 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDNhMThlNjRjMjJkOTA4YTY4NDcyMmNjMmVkZGE3YTGV3HIc: 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzZhMzdiZWNmMDJiM2JhY2NkYjdmZmY2MjJjMjQ2YjUOHoho: 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDNhMThlNjRjMjJkOTA4YTY4NDcyMmNjMmVkZGE3YTGV3HIc: ]] 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDNhMThlNjRjMjJkOTA4YTY4NDcyMmNjMmVkZGE3YTGV3HIc: 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.306 nvme0n1 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.306 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.566 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:08.566 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:08.566 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.566 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.566 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.566 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:08.566 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:29:08.566 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:08.566 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:08.566 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:08.566 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:08.566 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRjYzQyNDRiNjg4NDBjOWJhZTE5MTcwYjY0MWM4ZjA1YjdmZDViYTQxMWM3MTA1WBVZcA==: 00:29:08.566 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmVkZTYwZGE4MGY5MDA2Y2U5NDA1NmYxZWM5NTVjOWPWL4X1: 00:29:08.566 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:08.566 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:08.566 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRjYzQyNDRiNjg4NDBjOWJhZTE5MTcwYjY0MWM4ZjA1YjdmZDViYTQxMWM3MTA1WBVZcA==: 00:29:08.566 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmVkZTYwZGE4MGY5MDA2Y2U5NDA1NmYxZWM5NTVjOWPWL4X1: ]] 00:29:08.566 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmVkZTYwZGE4MGY5MDA2Y2U5NDA1NmYxZWM5NTVjOWPWL4X1: 00:29:08.566 19:44:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:29:08.566 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:08.566 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:08.566 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:08.566 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:08.566 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:08.566 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:08.566 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.566 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.566 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.566 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:08.566 19:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:08.566 19:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:08.566 19:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:08.566 19:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:08.566 19:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:08.566 19:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:08.566 19:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:08.567 19:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:08.567 19:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:08.567 19:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:08.567 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:08.567 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.567 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.567 nvme0n1 00:29:08.567 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.567 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:08.567 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:08.567 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.567 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.567 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.567 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:08.567 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:08.567 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.567 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjUwMjhkOWI0ZDFkNDBhNTk1ZTNhNWU2MjYyYTBhOGM3Y2QzNzI0N2NkODQyZjQ0OTg3ZmM1ZmU0YzM4ZjE1MFh6Sqg=: 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjUwMjhkOWI0ZDFkNDBhNTk1ZTNhNWU2MjYyYTBhOGM3Y2QzNzI0N2NkODQyZjQ0OTg3ZmM1ZmU0YzM4ZjE1MFh6Sqg=: 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.827 nvme0n1 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmUwNWEzZDVhOTI4MzgyZjg4YTM3ZGY5N2Q0ZmNjODIztVi3: 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjgzYTIwMWUyNjc1Yjg0M2Y4ZTg4ZGNiYjYxYjdhMjMyNzg3MTYyOWVlYWVhMWE0OWI4MmYxYzM2NGJkNWI0MZGS72U=: 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmUwNWEzZDVhOTI4MzgyZjg4YTM3ZGY5N2Q0ZmNjODIztVi3: 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjgzYTIwMWUyNjc1Yjg0M2Y4ZTg4ZGNiYjYxYjdhMjMyNzg3MTYyOWVlYWVhMWE0OWI4MmYxYzM2NGJkNWI0MZGS72U=: ]] 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjgzYTIwMWUyNjc1Yjg0M2Y4ZTg4ZGNiYjYxYjdhMjMyNzg3MTYyOWVlYWVhMWE0OWI4MmYxYzM2NGJkNWI0MZGS72U=: 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:08.827 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:08.828 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:08.828 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:29:08.828 19:44:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:08.828 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.828 19:44:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.828 19:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.828 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:08.828 19:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:08.828 19:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:08.828 19:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:08.828 19:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:08.828 19:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:08.828 19:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:08.828 19:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:08.828 19:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:08.828 19:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:08.828 19:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:08.828 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:09.088 19:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.088 19:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.088 nvme0n1 00:29:09.088 19:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.088 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:09.089 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:09.089 19:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.089 19:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.089 19:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.089 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:09.089 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:09.089 19:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.089 19:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.089 19:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.089 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:09.089 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:29:09.089 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:09.089 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:09.089 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:09.089 19:44:35 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:29:09.089 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRjOWRmZjhjNzIwYmMxMWVmZmYyODc2MWFiNWE2OGY5NGY2MTE3Y2Y5MzkyMDli+nWHXQ==: 00:29:09.089 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: 00:29:09.089 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:09.089 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:09.089 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRjOWRmZjhjNzIwYmMxMWVmZmYyODc2MWFiNWE2OGY5NGY2MTE3Y2Y5MzkyMDli+nWHXQ==: 00:29:09.089 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: ]] 00:29:09.089 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: 00:29:09.089 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:29:09.089 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:09.089 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:09.089 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:09.089 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:09.089 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:09.089 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:09.089 19:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.089 19:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.350 19:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.350 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:09.350 19:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:09.350 19:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:09.350 19:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:09.350 19:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:09.350 19:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:09.350 19:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:09.350 19:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:09.350 19:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:09.350 19:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:09.350 19:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:09.350 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:09.350 19:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.350 19:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.350 nvme0n1 00:29:09.350 
19:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.350 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:09.350 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:09.350 19:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.350 19:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.350 19:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.350 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:09.350 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:09.350 19:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.350 19:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzZhMzdiZWNmMDJiM2JhY2NkYjdmZmY2MjJjMjQ2YjUOHoho: 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDNhMThlNjRjMjJkOTA4YTY4NDcyMmNjMmVkZGE3YTGV3HIc: 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzZhMzdiZWNmMDJiM2JhY2NkYjdmZmY2MjJjMjQ2YjUOHoho: 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDNhMThlNjRjMjJkOTA4YTY4NDcyMmNjMmVkZGE3YTGV3HIc: ]] 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDNhMThlNjRjMjJkOTA4YTY4NDcyMmNjMmVkZGE3YTGV3HIc: 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.612 nvme0n1 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.612 19:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.873 19:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.873 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:09.873 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:29:09.873 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:09.873 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:09.873 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:09.873 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:09.873 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRjYzQyNDRiNjg4NDBjOWJhZTE5MTcwYjY0MWM4ZjA1YjdmZDViYTQxMWM3MTA1WBVZcA==: 00:29:09.873 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmVkZTYwZGE4MGY5MDA2Y2U5NDA1NmYxZWM5NTVjOWPWL4X1: 00:29:09.873 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:09.873 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:29:09.873 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRjYzQyNDRiNjg4NDBjOWJhZTE5MTcwYjY0MWM4ZjA1YjdmZDViYTQxMWM3MTA1WBVZcA==: 00:29:09.873 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmVkZTYwZGE4MGY5MDA2Y2U5NDA1NmYxZWM5NTVjOWPWL4X1: ]] 00:29:09.873 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmVkZTYwZGE4MGY5MDA2Y2U5NDA1NmYxZWM5NTVjOWPWL4X1: 00:29:09.873 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:29:09.873 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:09.873 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:09.873 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:09.873 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:09.873 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:09.873 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:09.873 19:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.873 19:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.873 19:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.873 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:09.873 19:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:09.873 19:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:09.873 19:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:09.873 19:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:09.873 19:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:09.873 19:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:09.873 19:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:09.873 19:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:09.873 19:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:09.873 19:44:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:09.873 19:44:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:09.873 19:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.873 19:44:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.873 nvme0n1 00:29:09.873 19:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.873 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:09.873 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:09.873 19:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.873 19:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.873 19:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.134 
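After every authenticated attach the script runs the same verification and teardown before moving to the next key: the controller list is read back over RPC, the name is compared against nvme0, and the controller is detached so the following digest/dhgroup/keyid combination starts clean. A minimal sketch of that check, again assuming the rpc_cmd wrapper used throughout this run:

  # Read back the attached controllers and confirm exactly the expected one came up.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]

  # Detach so the next key configuration is tested against a clean state.
  rpc_cmd bdev_nvme_detach_controller nvme0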
19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:10.134 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:10.134 19:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.134 19:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.134 19:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.134 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:10.134 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:29:10.134 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:10.134 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:10.134 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:10.134 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:10.134 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjUwMjhkOWI0ZDFkNDBhNTk1ZTNhNWU2MjYyYTBhOGM3Y2QzNzI0N2NkODQyZjQ0OTg3ZmM1ZmU0YzM4ZjE1MFh6Sqg=: 00:29:10.134 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:10.134 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:10.134 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:10.134 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjUwMjhkOWI0ZDFkNDBhNTk1ZTNhNWU2MjYyYTBhOGM3Y2QzNzI0N2NkODQyZjQ0OTg3ZmM1ZmU0YzM4ZjE1MFh6Sqg=: 00:29:10.134 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:10.134 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:29:10.134 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:10.134 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:10.134 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:10.134 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:10.134 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:10.134 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:10.134 19:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.134 19:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.134 19:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.134 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:10.134 19:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:10.134 19:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:10.134 19:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:10.134 19:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:10.134 19:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:10.134 19:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:10.134 19:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:10.134 19:44:36 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:10.134 19:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:10.134 19:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:10.134 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:10.134 19:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.134 19:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.134 nvme0n1 00:29:10.134 19:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.134 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:10.134 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:10.134 19:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.134 19:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.134 19:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.394 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:10.394 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:10.394 19:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.394 19:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.394 19:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.394 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:10.394 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:10.394 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:29:10.394 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:10.394 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:10.394 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:10.394 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:10.394 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmUwNWEzZDVhOTI4MzgyZjg4YTM3ZGY5N2Q0ZmNjODIztVi3: 00:29:10.394 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjgzYTIwMWUyNjc1Yjg0M2Y4ZTg4ZGNiYjYxYjdhMjMyNzg3MTYyOWVlYWVhMWE0OWI4MmYxYzM2NGJkNWI0MZGS72U=: 00:29:10.394 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:10.394 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:10.394 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmUwNWEzZDVhOTI4MzgyZjg4YTM3ZGY5N2Q0ZmNjODIztVi3: 00:29:10.394 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjgzYTIwMWUyNjc1Yjg0M2Y4ZTg4ZGNiYjYxYjdhMjMyNzg3MTYyOWVlYWVhMWE0OWI4MmYxYzM2NGJkNWI0MZGS72U=: ]] 00:29:10.394 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjgzYTIwMWUyNjc1Yjg0M2Y4ZTg4ZGNiYjYxYjdhMjMyNzg3MTYyOWVlYWVhMWE0OWI4MmYxYzM2NGJkNWI0MZGS72U=: 00:29:10.394 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:29:10.394 19:44:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:10.394 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:10.394 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:10.394 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:10.394 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:10.394 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:10.394 19:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.394 19:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.394 19:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.394 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:10.394 19:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:10.394 19:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:10.394 19:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:10.394 19:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:10.394 19:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:10.394 19:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:10.394 19:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:10.394 19:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:10.394 19:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:10.394 19:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:10.394 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:10.394 19:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.394 19:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.654 nvme0n1 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRjOWRmZjhjNzIwYmMxMWVmZmYyODc2MWFiNWE2OGY5NGY2MTE3Y2Y5MzkyMDli+nWHXQ==: 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRjOWRmZjhjNzIwYmMxMWVmZmYyODc2MWFiNWE2OGY5NGY2MTE3Y2Y5MzkyMDli+nWHXQ==: 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: ]] 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:10.654 19:44:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.654 19:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.914 nvme0n1 00:29:10.914 19:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.914 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:10.914 19:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.914 19:44:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:10.914 19:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.914 19:44:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.914 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:10.914 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:10.914 19:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.914 19:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.914 19:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.914 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:10.914 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:29:10.914 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:10.914 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:10.914 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:10.914 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:10.914 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzZhMzdiZWNmMDJiM2JhY2NkYjdmZmY2MjJjMjQ2YjUOHoho: 00:29:10.914 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDNhMThlNjRjMjJkOTA4YTY4NDcyMmNjMmVkZGE3YTGV3HIc: 00:29:10.914 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:10.914 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:10.914 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzZhMzdiZWNmMDJiM2JhY2NkYjdmZmY2MjJjMjQ2YjUOHoho: 00:29:10.914 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDNhMThlNjRjMjJkOTA4YTY4NDcyMmNjMmVkZGE3YTGV3HIc: ]] 00:29:10.914 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDNhMThlNjRjMjJkOTA4YTY4NDcyMmNjMmVkZGE3YTGV3HIc: 00:29:10.914 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:29:10.914 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:10.914 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:10.914 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:10.914 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:10.914 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:10.914 19:44:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:10.914 19:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.914 19:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.914 19:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.914 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:10.914 19:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:10.914 19:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:10.914 19:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:10.914 19:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:10.914 19:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:10.914 19:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:10.914 19:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:10.914 19:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:10.914 19:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:10.914 19:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:10.914 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:10.914 19:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.914 19:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.174 nvme0n1 00:29:11.174 19:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.174 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:11.174 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:11.174 19:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.174 19:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.174 19:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.433 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:11.433 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:11.433 19:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.433 19:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.433 19:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.433 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:11.433 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:29:11.433 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:11.434 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:11.434 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:11.434 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:29:11.434 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRjYzQyNDRiNjg4NDBjOWJhZTE5MTcwYjY0MWM4ZjA1YjdmZDViYTQxMWM3MTA1WBVZcA==: 00:29:11.434 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmVkZTYwZGE4MGY5MDA2Y2U5NDA1NmYxZWM5NTVjOWPWL4X1: 00:29:11.434 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:11.434 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:11.434 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRjYzQyNDRiNjg4NDBjOWJhZTE5MTcwYjY0MWM4ZjA1YjdmZDViYTQxMWM3MTA1WBVZcA==: 00:29:11.434 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmVkZTYwZGE4MGY5MDA2Y2U5NDA1NmYxZWM5NTVjOWPWL4X1: ]] 00:29:11.434 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmVkZTYwZGE4MGY5MDA2Y2U5NDA1NmYxZWM5NTVjOWPWL4X1: 00:29:11.434 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:29:11.434 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:11.434 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:11.434 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:11.434 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:11.434 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:11.434 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:11.434 19:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.434 19:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.434 19:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.434 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:11.434 19:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:11.434 19:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:11.434 19:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:11.434 19:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:11.434 19:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:11.434 19:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:11.434 19:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:11.434 19:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:11.434 19:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:11.434 19:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:11.434 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:11.434 19:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.434 19:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.694 nvme0n1 00:29:11.694 19:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.694 19:44:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:11.694 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:11.694 19:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.694 19:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.694 19:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.694 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:11.694 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:11.694 19:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.694 19:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.694 19:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.694 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:11.694 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:29:11.694 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:11.694 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:11.694 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:11.694 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:11.694 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjUwMjhkOWI0ZDFkNDBhNTk1ZTNhNWU2MjYyYTBhOGM3Y2QzNzI0N2NkODQyZjQ0OTg3ZmM1ZmU0YzM4ZjE1MFh6Sqg=: 00:29:11.694 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:11.694 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:11.694 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:11.694 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjUwMjhkOWI0ZDFkNDBhNTk1ZTNhNWU2MjYyYTBhOGM3Y2QzNzI0N2NkODQyZjQ0OTg3ZmM1ZmU0YzM4ZjE1MFh6Sqg=: 00:29:11.694 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:11.694 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:29:11.694 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:11.694 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:11.694 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:11.694 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:11.694 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:11.694 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:11.694 19:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.694 19:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.694 19:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.694 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:11.694 19:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:11.694 19:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:11.694 19:44:37 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:29:11.694 19:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:11.694 19:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:11.694 19:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:11.694 19:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:11.694 19:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:11.694 19:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:11.694 19:44:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:11.694 19:44:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:11.694 19:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.694 19:44:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.955 nvme0n1 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmUwNWEzZDVhOTI4MzgyZjg4YTM3ZGY5N2Q0ZmNjODIztVi3: 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjgzYTIwMWUyNjc1Yjg0M2Y4ZTg4ZGNiYjYxYjdhMjMyNzg3MTYyOWVlYWVhMWE0OWI4MmYxYzM2NGJkNWI0MZGS72U=: 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmUwNWEzZDVhOTI4MzgyZjg4YTM3ZGY5N2Q0ZmNjODIztVi3: 00:29:11.955 19:44:38 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjgzYTIwMWUyNjc1Yjg0M2Y4ZTg4ZGNiYjYxYjdhMjMyNzg3MTYyOWVlYWVhMWE0OWI4MmYxYzM2NGJkNWI0MZGS72U=: ]] 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjgzYTIwMWUyNjc1Yjg0M2Y4ZTg4ZGNiYjYxYjdhMjMyNzg3MTYyOWVlYWVhMWE0OWI4MmYxYzM2NGJkNWI0MZGS72U=: 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.955 19:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.528 nvme0n1 00:29:12.529 19:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.529 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:12.529 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:12.529 19:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.529 19:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.529 19:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.529 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:12.529 
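The same cycle repeats for every DH group and key: the trace walks ffdhe3072, ffdhe4096, now ffdhe6144, and further down ffdhe8192, re-keying the target with nvmet_auth_set_key and reconnecting for key IDs 0 through 4 each time. Roughly, the driving loop in host/auth.sh looks like the sketch below; this is a simplification reconstructed from the xtrace markers (host/auth.sh@101-104), nvmet_auth_set_key and connect_authenticate are the script's own helpers, and the digest is fixed at sha256 in this part of the run:

  for dhgroup in "${dhgroups[@]}"; do                     # ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192 ...
      for keyid in "${!keys[@]}"; do                      # key IDs 0..4; keyid 4 carries no controller key
          nvmet_auth_set_key sha256 "$dhgroup" "$keyid"   # program the target side for this key
          connect_authenticate sha256 "$dhgroup" "$keyid" # set_options + authenticated attach + verify + detach
      done
  done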
19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:12.529 19:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.529 19:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.529 19:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.529 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:12.529 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:29:12.529 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:12.529 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:12.529 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:12.529 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:12.529 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRjOWRmZjhjNzIwYmMxMWVmZmYyODc2MWFiNWE2OGY5NGY2MTE3Y2Y5MzkyMDli+nWHXQ==: 00:29:12.529 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: 00:29:12.529 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:12.529 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:12.529 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRjOWRmZjhjNzIwYmMxMWVmZmYyODc2MWFiNWE2OGY5NGY2MTE3Y2Y5MzkyMDli+nWHXQ==: 00:29:12.529 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: ]] 00:29:12.529 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: 00:29:12.529 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:29:12.529 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:12.529 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:12.529 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:12.529 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:12.529 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:12.529 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:12.529 19:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.529 19:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.529 19:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.529 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:12.529 19:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:12.529 19:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:12.529 19:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:12.529 19:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:12.529 19:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:12.529 19:44:38 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:12.529 19:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:12.529 19:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:12.529 19:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:12.529 19:44:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:12.529 19:44:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:12.529 19:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.529 19:44:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.098 nvme0n1 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzZhMzdiZWNmMDJiM2JhY2NkYjdmZmY2MjJjMjQ2YjUOHoho: 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDNhMThlNjRjMjJkOTA4YTY4NDcyMmNjMmVkZGE3YTGV3HIc: 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzZhMzdiZWNmMDJiM2JhY2NkYjdmZmY2MjJjMjQ2YjUOHoho: 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDNhMThlNjRjMjJkOTA4YTY4NDcyMmNjMmVkZGE3YTGV3HIc: ]] 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDNhMThlNjRjMjJkOTA4YTY4NDcyMmNjMmVkZGE3YTGV3HIc: 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:13.098 19:44:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.669 nvme0n1 00:29:13.669 19:44:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:13.669 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:13.669 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:13.669 19:44:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:13.669 19:44:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.669 19:44:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:13.669 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:13.669 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:13.669 19:44:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:13.669 19:44:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.669 19:44:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:13.669 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:13.669 
19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:29:13.669 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:13.669 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:13.669 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:13.669 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:13.669 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRjYzQyNDRiNjg4NDBjOWJhZTE5MTcwYjY0MWM4ZjA1YjdmZDViYTQxMWM3MTA1WBVZcA==: 00:29:13.669 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmVkZTYwZGE4MGY5MDA2Y2U5NDA1NmYxZWM5NTVjOWPWL4X1: 00:29:13.669 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:13.669 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:13.669 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRjYzQyNDRiNjg4NDBjOWJhZTE5MTcwYjY0MWM4ZjA1YjdmZDViYTQxMWM3MTA1WBVZcA==: 00:29:13.669 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmVkZTYwZGE4MGY5MDA2Y2U5NDA1NmYxZWM5NTVjOWPWL4X1: ]] 00:29:13.669 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmVkZTYwZGE4MGY5MDA2Y2U5NDA1NmYxZWM5NTVjOWPWL4X1: 00:29:13.669 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:29:13.669 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:13.669 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:13.669 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:13.669 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:13.669 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:13.669 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:13.669 19:44:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:13.669 19:44:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.670 19:44:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:13.670 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:13.670 19:44:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:13.670 19:44:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:13.670 19:44:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:13.670 19:44:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:13.670 19:44:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:13.670 19:44:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:13.670 19:44:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:13.670 19:44:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:13.670 19:44:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:13.670 19:44:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:13.670 19:44:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:13.670 19:44:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:13.670 19:44:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.238 nvme0n1 00:29:14.238 19:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:14.238 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:14.238 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:14.238 19:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:14.238 19:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.238 19:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:14.238 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:14.238 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:14.238 19:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:14.238 19:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.238 19:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:14.238 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:14.238 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:29:14.238 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:14.238 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:14.238 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:14.238 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:14.238 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjUwMjhkOWI0ZDFkNDBhNTk1ZTNhNWU2MjYyYTBhOGM3Y2QzNzI0N2NkODQyZjQ0OTg3ZmM1ZmU0YzM4ZjE1MFh6Sqg=: 00:29:14.238 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:14.238 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:14.238 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:14.238 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjUwMjhkOWI0ZDFkNDBhNTk1ZTNhNWU2MjYyYTBhOGM3Y2QzNzI0N2NkODQyZjQ0OTg3ZmM1ZmU0YzM4ZjE1MFh6Sqg=: 00:29:14.238 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:14.239 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:29:14.239 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:14.239 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:14.239 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:14.239 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:14.239 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:14.239 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:14.239 19:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:14.239 19:44:40 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:14.239 19:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:14.239 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:14.239 19:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:14.239 19:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:14.239 19:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:14.239 19:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:14.239 19:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:14.239 19:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:14.239 19:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:14.239 19:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:14.239 19:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:14.239 19:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:14.239 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:14.239 19:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:14.239 19:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.810 nvme0n1 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmUwNWEzZDVhOTI4MzgyZjg4YTM3ZGY5N2Q0ZmNjODIztVi3: 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NjgzYTIwMWUyNjc1Yjg0M2Y4ZTg4ZGNiYjYxYjdhMjMyNzg3MTYyOWVlYWVhMWE0OWI4MmYxYzM2NGJkNWI0MZGS72U=: 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmUwNWEzZDVhOTI4MzgyZjg4YTM3ZGY5N2Q0ZmNjODIztVi3: 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjgzYTIwMWUyNjc1Yjg0M2Y4ZTg4ZGNiYjYxYjdhMjMyNzg3MTYyOWVlYWVhMWE0OWI4MmYxYzM2NGJkNWI0MZGS72U=: ]] 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjgzYTIwMWUyNjc1Yjg0M2Y4ZTg4ZGNiYjYxYjdhMjMyNzg3MTYyOWVlYWVhMWE0OWI4MmYxYzM2NGJkNWI0MZGS72U=: 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:14.810 19:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:14.811 19:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.749 nvme0n1 00:29:15.749 19:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.749 19:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:15.749 19:44:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:15.749 19:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.749 19:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.749 19:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.749 19:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:15.750 19:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:15.750 19:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.750 19:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.750 19:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.750 19:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:15.750 19:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:29:15.750 19:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:15.750 19:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:15.750 19:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:15.750 19:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:15.750 19:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRjOWRmZjhjNzIwYmMxMWVmZmYyODc2MWFiNWE2OGY5NGY2MTE3Y2Y5MzkyMDli+nWHXQ==: 00:29:15.750 19:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: 00:29:15.750 19:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:15.750 19:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:15.750 19:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRjOWRmZjhjNzIwYmMxMWVmZmYyODc2MWFiNWE2OGY5NGY2MTE3Y2Y5MzkyMDli+nWHXQ==: 00:29:15.750 19:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: ]] 00:29:15.750 19:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: 00:29:15.750 19:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:29:15.750 19:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:15.750 19:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:15.750 19:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:15.750 19:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:15.750 19:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:15.750 19:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:15.750 19:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.750 19:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.750 19:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.750 19:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:15.750 19:44:41 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:29:15.750 19:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:15.750 19:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:15.750 19:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:15.750 19:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:15.750 19:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:15.750 19:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:15.750 19:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:15.750 19:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:15.750 19:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:15.750 19:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:15.750 19:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.750 19:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.317 nvme0n1 00:29:16.317 19:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.317 19:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:16.318 19:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:16.318 19:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.318 19:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.318 19:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.577 19:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:16.577 19:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:16.577 19:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.577 19:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.577 19:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.577 19:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:16.577 19:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:29:16.577 19:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:16.577 19:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:16.577 19:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:16.577 19:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:16.577 19:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzZhMzdiZWNmMDJiM2JhY2NkYjdmZmY2MjJjMjQ2YjUOHoho: 00:29:16.577 19:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDNhMThlNjRjMjJkOTA4YTY4NDcyMmNjMmVkZGE3YTGV3HIc: 00:29:16.577 19:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:16.577 19:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:16.577 19:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NzZhMzdiZWNmMDJiM2JhY2NkYjdmZmY2MjJjMjQ2YjUOHoho: 00:29:16.577 19:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDNhMThlNjRjMjJkOTA4YTY4NDcyMmNjMmVkZGE3YTGV3HIc: ]] 00:29:16.577 19:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDNhMThlNjRjMjJkOTA4YTY4NDcyMmNjMmVkZGE3YTGV3HIc: 00:29:16.577 19:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:29:16.577 19:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:16.577 19:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:16.577 19:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:16.577 19:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:16.577 19:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:16.577 19:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:16.577 19:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.577 19:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.577 19:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.577 19:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:16.577 19:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:16.577 19:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:16.577 19:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:16.577 19:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:16.577 19:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:16.577 19:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:16.577 19:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:16.577 19:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:16.577 19:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:16.577 19:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:16.577 19:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:16.577 19:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.577 19:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.149 nvme0n1 00:29:17.149 19:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.149 19:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:17.149 19:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.149 19:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:17.149 19:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.149 19:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.409 19:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:17.409 
19:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:17.409 19:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.409 19:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.409 19:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.409 19:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:17.410 19:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:29:17.410 19:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:17.410 19:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:17.410 19:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:17.410 19:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:17.410 19:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRjYzQyNDRiNjg4NDBjOWJhZTE5MTcwYjY0MWM4ZjA1YjdmZDViYTQxMWM3MTA1WBVZcA==: 00:29:17.410 19:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmVkZTYwZGE4MGY5MDA2Y2U5NDA1NmYxZWM5NTVjOWPWL4X1: 00:29:17.410 19:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:17.410 19:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:17.410 19:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRjYzQyNDRiNjg4NDBjOWJhZTE5MTcwYjY0MWM4ZjA1YjdmZDViYTQxMWM3MTA1WBVZcA==: 00:29:17.410 19:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmVkZTYwZGE4MGY5MDA2Y2U5NDA1NmYxZWM5NTVjOWPWL4X1: ]] 00:29:17.410 19:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmVkZTYwZGE4MGY5MDA2Y2U5NDA1NmYxZWM5NTVjOWPWL4X1: 00:29:17.410 19:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:29:17.410 19:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:17.410 19:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:17.410 19:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:17.410 19:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:17.410 19:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:17.410 19:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:17.410 19:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.410 19:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.410 19:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.410 19:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:17.410 19:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:17.410 19:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:17.410 19:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:17.410 19:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:17.410 19:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:17.410 19:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
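The get_main_ns_ip helper that keeps reappearing in this trace is a small transport lookup: it maps rdma to NVMF_FIRST_TARGET_IP and tcp to NVMF_INITIATOR_IP, bails out through the [[ -z ... ]] guards if the transport or the variable is unset, and for this TCP run resolves to 10.0.0.1. A minimal sketch of that logic, reconstructed from the trace and not the verbatim nvmf/common.sh implementation (the TEST_TRANSPORT variable name is an assumption; the candidate names and the 10.0.0.1 result are taken from the log):

# Sketch of the IP selection traced from nvmf/common.sh; variable TEST_TRANSPORT is assumed
# to hold "tcp" here, matching the [[ -z tcp ]] guard seen in the trace.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # resolves to NVMF_INITIATOR_IP for tcp
    [[ -z ${!ip} ]] && return 1            # indirect expansion; 10.0.0.1 in this run
    echo "${!ip}"
}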
00:29:17.410 19:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:17.410 19:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:17.410 19:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:17.410 19:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:17.410 19:44:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:17.410 19:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.410 19:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.980 nvme0n1 00:29:17.980 19:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.980 19:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:17.980 19:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:17.980 19:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.980 19:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.980 19:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:18.240 19:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.240 19:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.240 19:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:18.240 19:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.240 19:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:18.240 19:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:18.240 19:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:29:18.240 19:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:18.240 19:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:18.240 19:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:18.240 19:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:18.240 19:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjUwMjhkOWI0ZDFkNDBhNTk1ZTNhNWU2MjYyYTBhOGM3Y2QzNzI0N2NkODQyZjQ0OTg3ZmM1ZmU0YzM4ZjE1MFh6Sqg=: 00:29:18.240 19:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:18.240 19:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:18.240 19:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:18.240 19:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjUwMjhkOWI0ZDFkNDBhNTk1ZTNhNWU2MjYyYTBhOGM3Y2QzNzI0N2NkODQyZjQ0OTg3ZmM1ZmU0YzM4ZjE1MFh6Sqg=: 00:29:18.240 19:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:18.240 19:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:29:18.240 19:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:18.240 19:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:18.240 19:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:18.240 
19:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:18.240 19:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:18.240 19:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:18.240 19:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:18.240 19:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.240 19:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:18.240 19:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:18.240 19:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:18.240 19:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:18.240 19:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:18.240 19:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.240 19:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.240 19:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:18.240 19:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:18.240 19:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:18.240 19:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:18.240 19:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:18.240 19:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:18.240 19:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:18.240 19:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.809 nvme0n1 00:29:18.809 19:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:18.809 19:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.809 19:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:18.809 19:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:18.809 19:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.809 19:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmUwNWEzZDVhOTI4MzgyZjg4YTM3ZGY5N2Q0ZmNjODIztVi3: 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjgzYTIwMWUyNjc1Yjg0M2Y4ZTg4ZGNiYjYxYjdhMjMyNzg3MTYyOWVlYWVhMWE0OWI4MmYxYzM2NGJkNWI0MZGS72U=: 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmUwNWEzZDVhOTI4MzgyZjg4YTM3ZGY5N2Q0ZmNjODIztVi3: 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjgzYTIwMWUyNjc1Yjg0M2Y4ZTg4ZGNiYjYxYjdhMjMyNzg3MTYyOWVlYWVhMWE0OWI4MmYxYzM2NGJkNWI0MZGS72U=: ]] 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjgzYTIwMWUyNjc1Yjg0M2Y4ZTg4ZGNiYjYxYjdhMjMyNzg3MTYyOWVlYWVhMWE0OWI4MmYxYzM2NGJkNWI0MZGS72U=: 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.069 nvme0n1 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.069 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRjOWRmZjhjNzIwYmMxMWVmZmYyODc2MWFiNWE2OGY5NGY2MTE3Y2Y5MzkyMDli+nWHXQ==: 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRjOWRmZjhjNzIwYmMxMWVmZmYyODc2MWFiNWE2OGY5NGY2MTE3Y2Y5MzkyMDli+nWHXQ==: 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: ]] 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
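On the target side, nvmet_auth_set_key pushes three values for the current host: the HMAC name, the DH group, and the DHHC-1 secret, plus a second secret when the key index also carries a controller key for bidirectional authentication. The redirect targets are outside this excerpt; on a Linux kernel nvmet target they would normally be the per-host configfs attributes, so treat the paths below as assumptions, while the values are the ones traced just above for sha384/ffdhe2048/keyid 1:

# Hedged sketch of the target-side key setup for sha384 / ffdhe2048 / keyid 1.
# The configfs paths are assumed (standard nvmet per-host attributes); the values come from the trace above.
host_cfg=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha384)' > "$host_cfg/dhchap_hash"
echo ffdhe2048 > "$host_cfg/dhchap_dhgroup"
echo 'DHHC-1:00:YzRjOWRmZjhjNzIwYmMxMWVmZmYyODc2MWFiNWE2OGY5NGY2MTE3Y2Y5MzkyMDli+nWHXQ==:' > "$host_cfg/dhchap_key"
# keyid 1 also has a controller key (ckey1), so the reverse-direction secret is set as well:
echo 'DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==:' > "$host_cfg/dhchap_ctrl_key"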
00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.329 nvme0n1 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzZhMzdiZWNmMDJiM2JhY2NkYjdmZmY2MjJjMjQ2YjUOHoho: 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDNhMThlNjRjMjJkOTA4YTY4NDcyMmNjMmVkZGE3YTGV3HIc: 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzZhMzdiZWNmMDJiM2JhY2NkYjdmZmY2MjJjMjQ2YjUOHoho: 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDNhMThlNjRjMjJkOTA4YTY4NDcyMmNjMmVkZGE3YTGV3HIc: ]] 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDNhMThlNjRjMjJkOTA4YTY4NDcyMmNjMmVkZGE3YTGV3HIc: 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:19.329 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.590 nvme0n1 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRjYzQyNDRiNjg4NDBjOWJhZTE5MTcwYjY0MWM4ZjA1YjdmZDViYTQxMWM3MTA1WBVZcA==: 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmVkZTYwZGE4MGY5MDA2Y2U5NDA1NmYxZWM5NTVjOWPWL4X1: 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRjYzQyNDRiNjg4NDBjOWJhZTE5MTcwYjY0MWM4ZjA1YjdmZDViYTQxMWM3MTA1WBVZcA==: 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmVkZTYwZGE4MGY5MDA2Y2U5NDA1NmYxZWM5NTVjOWPWL4X1: ]] 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmVkZTYwZGE4MGY5MDA2Y2U5NDA1NmYxZWM5NTVjOWPWL4X1: 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.590 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.850 nvme0n1 00:29:19.850 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.850 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.850 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:19.851 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.851 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.851 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.851 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.851 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:19.851 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.851 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.851 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.851 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:19.851 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:29:19.851 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:19.851 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:19.851 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:19.851 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:19.851 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjUwMjhkOWI0ZDFkNDBhNTk1ZTNhNWU2MjYyYTBhOGM3Y2QzNzI0N2NkODQyZjQ0OTg3ZmM1ZmU0YzM4ZjE1MFh6Sqg=: 00:29:19.851 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:19.851 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:19.851 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:19.851 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZjUwMjhkOWI0ZDFkNDBhNTk1ZTNhNWU2MjYyYTBhOGM3Y2QzNzI0N2NkODQyZjQ0OTg3ZmM1ZmU0YzM4ZjE1MFh6Sqg=: 00:29:19.851 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:19.851 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:29:19.851 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:19.851 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:19.851 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:19.851 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:19.851 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:19.851 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:19.851 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.851 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.851 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.851 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:19.851 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:19.851 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:19.851 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:19.851 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:19.851 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:19.851 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:19.851 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:19.851 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:19.851 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:19.851 19:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:19.851 19:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:19.851 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.851 19:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.111 nvme0n1 00:29:20.111 19:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.111 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmUwNWEzZDVhOTI4MzgyZjg4YTM3ZGY5N2Q0ZmNjODIztVi3: 00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjgzYTIwMWUyNjc1Yjg0M2Y4ZTg4ZGNiYjYxYjdhMjMyNzg3MTYyOWVlYWVhMWE0OWI4MmYxYzM2NGJkNWI0MZGS72U=: 00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmUwNWEzZDVhOTI4MzgyZjg4YTM3ZGY5N2Q0ZmNjODIztVi3: 00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjgzYTIwMWUyNjc1Yjg0M2Y4ZTg4ZGNiYjYxYjdhMjMyNzg3MTYyOWVlYWVhMWE0OWI4MmYxYzM2NGJkNWI0MZGS72U=: ]] 00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjgzYTIwMWUyNjc1Yjg0M2Y4ZTg4ZGNiYjYxYjdhMjMyNzg3MTYyOWVlYWVhMWE0OWI4MmYxYzM2NGJkNWI0MZGS72U=: 00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
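Each connect_authenticate pass then boils down to three host-side SPDK RPCs: pin the allowed digest and DH group with bdev_nvme_set_options, attach the controller with the DH-HMAC-CHAP key (and controller key when one is defined), and verify plus tear down via bdev_nvme_get_controllers and bdev_nvme_detach_controller. A condensed sketch of the pass in progress here (sha384/ffdhe3072/keyid 0), issued through SPDK's scripts/rpc.py rather than the test's rpc_cmd wrapper, with the rpc.py path and the pre-registered key names key0/ckey0 carried over from the log as assumptions:

# Hedged sketch of one connect_authenticate pass (sha384 / ffdhe3072 / keyid 0).
# Assumes the kernel target already exposes nqn.2024-02.io.spdk:cnode0 on 10.0.0.1:4420
# and that key0/ckey0 were registered with the SPDK host earlier in the test (not shown in this excerpt).
./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expected to print nvme0
./scripts/rpc.py bdev_nvme_detach_controller nvme0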
00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.112 19:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.372 nvme0n1 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRjOWRmZjhjNzIwYmMxMWVmZmYyODc2MWFiNWE2OGY5NGY2MTE3Y2Y5MzkyMDli+nWHXQ==: 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRjOWRmZjhjNzIwYmMxMWVmZmYyODc2MWFiNWE2OGY5NGY2MTE3Y2Y5MzkyMDli+nWHXQ==: 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: ]] 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.372 19:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.633 nvme0n1 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzZhMzdiZWNmMDJiM2JhY2NkYjdmZmY2MjJjMjQ2YjUOHoho: 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDNhMThlNjRjMjJkOTA4YTY4NDcyMmNjMmVkZGE3YTGV3HIc: 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzZhMzdiZWNmMDJiM2JhY2NkYjdmZmY2MjJjMjQ2YjUOHoho: 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDNhMThlNjRjMjJkOTA4YTY4NDcyMmNjMmVkZGE3YTGV3HIc: ]] 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDNhMThlNjRjMjJkOTA4YTY4NDcyMmNjMmVkZGE3YTGV3HIc: 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.633 19:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.893 nvme0n1 00:29:20.894 19:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.894 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:20.894 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:20.894 19:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.894 19:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.894 19:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.894 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:20.894 19:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:20.894 19:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.894 19:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.894 19:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.894 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:20.894 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:29:20.894 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:20.894 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:20.894 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:20.894 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:20.894 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRjYzQyNDRiNjg4NDBjOWJhZTE5MTcwYjY0MWM4ZjA1YjdmZDViYTQxMWM3MTA1WBVZcA==: 00:29:20.894 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmVkZTYwZGE4MGY5MDA2Y2U5NDA1NmYxZWM5NTVjOWPWL4X1: 00:29:20.894 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:20.894 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:20.894 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRjYzQyNDRiNjg4NDBjOWJhZTE5MTcwYjY0MWM4ZjA1YjdmZDViYTQxMWM3MTA1WBVZcA==: 00:29:20.894 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmVkZTYwZGE4MGY5MDA2Y2U5NDA1NmYxZWM5NTVjOWPWL4X1: ]] 00:29:20.894 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmVkZTYwZGE4MGY5MDA2Y2U5NDA1NmYxZWM5NTVjOWPWL4X1: 00:29:20.894 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:29:20.894 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:20.894 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:20.894 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:20.894 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:20.894 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:20.894 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:20.894 19:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.894 19:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.894 19:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.894 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:20.894 19:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:20.894 19:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:20.894 19:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:20.894 19:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:20.894 19:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:20.894 19:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:20.894 19:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:20.894 19:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:20.894 19:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:20.894 19:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:20.894 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:20.894 19:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.894 19:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.154 nvme0n1 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZjUwMjhkOWI0ZDFkNDBhNTk1ZTNhNWU2MjYyYTBhOGM3Y2QzNzI0N2NkODQyZjQ0OTg3ZmM1ZmU0YzM4ZjE1MFh6Sqg=: 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjUwMjhkOWI0ZDFkNDBhNTk1ZTNhNWU2MjYyYTBhOGM3Y2QzNzI0N2NkODQyZjQ0OTg3ZmM1ZmU0YzM4ZjE1MFh6Sqg=: 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.154 19:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.414 nvme0n1 00:29:21.414 19:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.414 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:21.414 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:21.414 19:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.414 19:44:47 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.414 19:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.414 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:21.414 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:21.414 19:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.414 19:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.414 19:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.414 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:21.414 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:21.414 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:29:21.414 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:21.414 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:21.414 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:21.414 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:21.414 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmUwNWEzZDVhOTI4MzgyZjg4YTM3ZGY5N2Q0ZmNjODIztVi3: 00:29:21.414 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjgzYTIwMWUyNjc1Yjg0M2Y4ZTg4ZGNiYjYxYjdhMjMyNzg3MTYyOWVlYWVhMWE0OWI4MmYxYzM2NGJkNWI0MZGS72U=: 00:29:21.414 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:21.414 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:21.414 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmUwNWEzZDVhOTI4MzgyZjg4YTM3ZGY5N2Q0ZmNjODIztVi3: 00:29:21.414 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjgzYTIwMWUyNjc1Yjg0M2Y4ZTg4ZGNiYjYxYjdhMjMyNzg3MTYyOWVlYWVhMWE0OWI4MmYxYzM2NGJkNWI0MZGS72U=: ]] 00:29:21.414 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjgzYTIwMWUyNjc1Yjg0M2Y4ZTg4ZGNiYjYxYjdhMjMyNzg3MTYyOWVlYWVhMWE0OWI4MmYxYzM2NGJkNWI0MZGS72U=: 00:29:21.414 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:29:21.415 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:21.415 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:21.415 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:21.415 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:21.415 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:21.415 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:21.415 19:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.415 19:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.415 19:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.415 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:21.415 19:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:21.415 19:44:47 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:29:21.415 19:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:21.415 19:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:21.415 19:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:21.415 19:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:21.415 19:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:21.415 19:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:21.415 19:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:21.415 19:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:21.415 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:21.415 19:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.415 19:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.674 nvme0n1 00:29:21.674 19:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.674 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:21.674 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:21.674 19:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.674 19:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.934 19:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.934 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:21.934 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:21.934 19:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.934 19:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.934 19:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.934 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:21.934 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:29:21.934 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:21.934 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:21.934 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:21.934 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:21.934 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRjOWRmZjhjNzIwYmMxMWVmZmYyODc2MWFiNWE2OGY5NGY2MTE3Y2Y5MzkyMDli+nWHXQ==: 00:29:21.934 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: 00:29:21.934 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:21.934 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:21.934 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzRjOWRmZjhjNzIwYmMxMWVmZmYyODc2MWFiNWE2OGY5NGY2MTE3Y2Y5MzkyMDli+nWHXQ==: 00:29:21.934 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: ]] 00:29:21.934 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: 00:29:21.934 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:29:21.934 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:21.934 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:21.934 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:21.934 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:21.934 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:21.934 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:21.934 19:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.934 19:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.934 19:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.934 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:21.934 19:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:21.934 19:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:21.934 19:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:21.934 19:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:21.934 19:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:21.934 19:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:21.934 19:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:21.934 19:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:21.934 19:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:21.934 19:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:21.934 19:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:21.934 19:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.934 19:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.194 nvme0n1 00:29:22.194 19:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:22.195 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:22.195 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:22.195 19:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:22.195 19:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.195 19:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:22.195 19:44:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:22.195 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:22.195 19:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:22.195 19:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.195 19:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:22.195 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:22.195 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:29:22.195 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:22.195 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:22.195 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:22.195 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:22.195 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzZhMzdiZWNmMDJiM2JhY2NkYjdmZmY2MjJjMjQ2YjUOHoho: 00:29:22.195 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDNhMThlNjRjMjJkOTA4YTY4NDcyMmNjMmVkZGE3YTGV3HIc: 00:29:22.195 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:22.195 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:22.195 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzZhMzdiZWNmMDJiM2JhY2NkYjdmZmY2MjJjMjQ2YjUOHoho: 00:29:22.195 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDNhMThlNjRjMjJkOTA4YTY4NDcyMmNjMmVkZGE3YTGV3HIc: ]] 00:29:22.195 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDNhMThlNjRjMjJkOTA4YTY4NDcyMmNjMmVkZGE3YTGV3HIc: 00:29:22.195 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:29:22.195 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:22.195 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:22.195 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:22.195 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:22.195 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:22.195 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:22.195 19:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:22.195 19:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.195 19:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:22.195 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:22.195 19:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:22.195 19:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:22.195 19:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:22.195 19:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:22.195 19:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:22.195 19:44:48 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:22.195 19:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:22.195 19:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:22.195 19:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:22.195 19:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:22.195 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:22.195 19:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:22.195 19:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.455 nvme0n1 00:29:22.456 19:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:22.456 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:22.456 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:22.456 19:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:22.456 19:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.456 19:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:22.456 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:22.456 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:22.456 19:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:22.456 19:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.716 19:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:22.716 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:22.716 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:29:22.716 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:22.716 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:22.716 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:22.716 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:22.716 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRjYzQyNDRiNjg4NDBjOWJhZTE5MTcwYjY0MWM4ZjA1YjdmZDViYTQxMWM3MTA1WBVZcA==: 00:29:22.716 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmVkZTYwZGE4MGY5MDA2Y2U5NDA1NmYxZWM5NTVjOWPWL4X1: 00:29:22.716 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:22.716 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:22.716 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRjYzQyNDRiNjg4NDBjOWJhZTE5MTcwYjY0MWM4ZjA1YjdmZDViYTQxMWM3MTA1WBVZcA==: 00:29:22.716 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmVkZTYwZGE4MGY5MDA2Y2U5NDA1NmYxZWM5NTVjOWPWL4X1: ]] 00:29:22.716 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmVkZTYwZGE4MGY5MDA2Y2U5NDA1NmYxZWM5NTVjOWPWL4X1: 00:29:22.716 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:29:22.716 19:44:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:22.716 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:22.716 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:22.716 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:22.716 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:22.716 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:22.716 19:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:22.716 19:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.716 19:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:22.716 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:22.716 19:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:22.716 19:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:22.716 19:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:22.716 19:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:22.716 19:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:22.716 19:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:22.716 19:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:22.716 19:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:22.716 19:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:22.716 19:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:22.716 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:22.716 19:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:22.716 19:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.977 nvme0n1 00:29:22.977 19:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:22.977 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:22.977 19:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:22.977 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:22.977 19:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.977 19:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:22.977 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:22.977 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:22.977 19:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:22.977 19:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.977 19:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:22.977 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:29:22.977 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:29:22.977 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:22.977 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:22.977 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:22.977 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:22.977 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjUwMjhkOWI0ZDFkNDBhNTk1ZTNhNWU2MjYyYTBhOGM3Y2QzNzI0N2NkODQyZjQ0OTg3ZmM1ZmU0YzM4ZjE1MFh6Sqg=: 00:29:22.977 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:22.977 19:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:22.977 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:22.977 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjUwMjhkOWI0ZDFkNDBhNTk1ZTNhNWU2MjYyYTBhOGM3Y2QzNzI0N2NkODQyZjQ0OTg3ZmM1ZmU0YzM4ZjE1MFh6Sqg=: 00:29:22.977 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:22.977 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:29:22.977 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:22.977 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:22.977 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:22.977 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:22.977 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:22.977 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:22.978 19:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:22.978 19:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.978 19:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:22.978 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:22.978 19:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:22.978 19:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:22.978 19:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:22.978 19:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:22.978 19:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:22.978 19:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:22.978 19:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:22.978 19:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:22.978 19:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:22.978 19:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:22.978 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:22.978 19:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:29:22.978 19:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.240 nvme0n1 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmUwNWEzZDVhOTI4MzgyZjg4YTM3ZGY5N2Q0ZmNjODIztVi3: 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjgzYTIwMWUyNjc1Yjg0M2Y4ZTg4ZGNiYjYxYjdhMjMyNzg3MTYyOWVlYWVhMWE0OWI4MmYxYzM2NGJkNWI0MZGS72U=: 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmUwNWEzZDVhOTI4MzgyZjg4YTM3ZGY5N2Q0ZmNjODIztVi3: 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjgzYTIwMWUyNjc1Yjg0M2Y4ZTg4ZGNiYjYxYjdhMjMyNzg3MTYyOWVlYWVhMWE0OWI4MmYxYzM2NGJkNWI0MZGS72U=: ]] 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjgzYTIwMWUyNjc1Yjg0M2Y4ZTg4ZGNiYjYxYjdhMjMyNzg3MTYyOWVlYWVhMWE0OWI4MmYxYzM2NGJkNWI0MZGS72U=: 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.240 19:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.809 nvme0n1 00:29:23.809 19:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.809 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:23.809 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:23.809 19:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.809 19:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.809 19:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.809 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:23.809 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:23.809 19:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.809 19:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.809 19:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.809 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:23.809 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:29:23.809 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:23.809 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:23.809 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:23.809 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:23.809 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YzRjOWRmZjhjNzIwYmMxMWVmZmYyODc2MWFiNWE2OGY5NGY2MTE3Y2Y5MzkyMDli+nWHXQ==: 00:29:23.809 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: 00:29:23.809 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:23.809 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:23.809 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRjOWRmZjhjNzIwYmMxMWVmZmYyODc2MWFiNWE2OGY5NGY2MTE3Y2Y5MzkyMDli+nWHXQ==: 00:29:23.809 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: ]] 00:29:23.809 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: 00:29:23.809 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:29:23.809 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:23.809 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:23.809 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:23.809 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:23.809 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:23.809 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:23.809 19:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.809 19:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.809 19:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.809 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:23.809 19:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:23.809 19:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:23.810 19:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:23.810 19:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:23.810 19:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:23.810 19:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:23.810 19:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:23.810 19:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:23.810 19:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:23.810 19:44:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:23.810 19:44:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:23.810 19:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.810 19:44:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.379 nvme0n1 00:29:24.379 19:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.379 19:44:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:24.379 19:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:24.379 19:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.379 19:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.379 19:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.379 19:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:24.379 19:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:24.379 19:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.379 19:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.379 19:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.379 19:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:24.379 19:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:29:24.379 19:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:24.379 19:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:24.379 19:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:24.379 19:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:24.379 19:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzZhMzdiZWNmMDJiM2JhY2NkYjdmZmY2MjJjMjQ2YjUOHoho: 00:29:24.379 19:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDNhMThlNjRjMjJkOTA4YTY4NDcyMmNjMmVkZGE3YTGV3HIc: 00:29:24.379 19:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:24.379 19:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:24.379 19:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzZhMzdiZWNmMDJiM2JhY2NkYjdmZmY2MjJjMjQ2YjUOHoho: 00:29:24.379 19:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDNhMThlNjRjMjJkOTA4YTY4NDcyMmNjMmVkZGE3YTGV3HIc: ]] 00:29:24.379 19:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDNhMThlNjRjMjJkOTA4YTY4NDcyMmNjMmVkZGE3YTGV3HIc: 00:29:24.379 19:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:29:24.379 19:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:24.379 19:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:24.379 19:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:24.379 19:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:24.379 19:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:24.379 19:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:24.379 19:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.379 19:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.379 19:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.379 19:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:24.379 19:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:29:24.379 19:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:24.379 19:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:24.379 19:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:24.379 19:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:24.379 19:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:24.379 19:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:24.379 19:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:24.379 19:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:24.379 19:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:24.379 19:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:24.380 19:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.380 19:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.950 nvme0n1 00:29:24.950 19:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.950 19:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:24.950 19:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:24.950 19:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.950 19:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.950 19:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.950 19:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:24.950 19:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:24.950 19:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.950 19:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.950 19:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.950 19:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:24.950 19:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:29:24.950 19:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:24.950 19:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:24.950 19:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:24.950 19:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:24.950 19:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRjYzQyNDRiNjg4NDBjOWJhZTE5MTcwYjY0MWM4ZjA1YjdmZDViYTQxMWM3MTA1WBVZcA==: 00:29:24.950 19:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmVkZTYwZGE4MGY5MDA2Y2U5NDA1NmYxZWM5NTVjOWPWL4X1: 00:29:24.950 19:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:24.950 19:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:24.950 19:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YzRjYzQyNDRiNjg4NDBjOWJhZTE5MTcwYjY0MWM4ZjA1YjdmZDViYTQxMWM3MTA1WBVZcA==: 00:29:24.950 19:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmVkZTYwZGE4MGY5MDA2Y2U5NDA1NmYxZWM5NTVjOWPWL4X1: ]] 00:29:24.950 19:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmVkZTYwZGE4MGY5MDA2Y2U5NDA1NmYxZWM5NTVjOWPWL4X1: 00:29:24.950 19:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:29:24.950 19:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:24.950 19:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:24.950 19:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:24.950 19:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:24.950 19:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:24.950 19:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:24.950 19:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.950 19:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.950 19:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.950 19:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:24.950 19:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:24.950 19:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:24.950 19:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:24.950 19:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:24.950 19:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:24.950 19:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:24.950 19:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:24.950 19:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:24.950 19:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:24.951 19:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:24.951 19:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:24.951 19:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.951 19:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.529 nvme0n1 00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjUwMjhkOWI0ZDFkNDBhNTk1ZTNhNWU2MjYyYTBhOGM3Y2QzNzI0N2NkODQyZjQ0OTg3ZmM1ZmU0YzM4ZjE1MFh6Sqg=: 00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjUwMjhkOWI0ZDFkNDBhNTk1ZTNhNWU2MjYyYTBhOGM3Y2QzNzI0N2NkODQyZjQ0OTg3ZmM1ZmU0YzM4ZjE1MFh6Sqg=: 00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
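
[Editor's note] The repeated blocks above are single passes of the nvmf_auth_host loop: each pass provisions a DHHC-1 secret on the kernel nvmet target for one digest/DH-group/key-index combination, restricts the SPDK initiator to that digest and DH group, re-attaches nvme0 with the matching key names, confirms the controller is reported, and detaches it again. Below is a minimal sketch of the host-side RPC sequence as it appears in the trace, assuming rpc_cmd is the autotest suite's wrapper around SPDK's rpc.py and that keyN/ckeyN are key names the bdev_nvme layer already knows about; all literal values are taken from, or modeled on, this trace and are illustrative only.

# Hedged sketch of the per-iteration host-side sequence visible in the trace
# (example values: digest=sha384, dhgroup=ffdhe6144, keyid=2).
connect_authenticate_sketch() {
    local digest=$1 dhgroup=$2 keyid=$3

    # Limit the initiator to the single digest/DH-group pair under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach with the per-index host key; the controller (bidirectional) key is
    # only passed when one exists for that index (keyid 4 has none in this run).
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

    # The authenticated controller must show up as nvme0, then is torn down again.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}
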
00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.529 19:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.102 nvme0n1 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmUwNWEzZDVhOTI4MzgyZjg4YTM3ZGY5N2Q0ZmNjODIztVi3: 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjgzYTIwMWUyNjc1Yjg0M2Y4ZTg4ZGNiYjYxYjdhMjMyNzg3MTYyOWVlYWVhMWE0OWI4MmYxYzM2NGJkNWI0MZGS72U=: 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmUwNWEzZDVhOTI4MzgyZjg4YTM3ZGY5N2Q0ZmNjODIztVi3: 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjgzYTIwMWUyNjc1Yjg0M2Y4ZTg4ZGNiYjYxYjdhMjMyNzg3MTYyOWVlYWVhMWE0OWI4MmYxYzM2NGJkNWI0MZGS72U=: ]] 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjgzYTIwMWUyNjc1Yjg0M2Y4ZTg4ZGNiYjYxYjdhMjMyNzg3MTYyOWVlYWVhMWE0OWI4MmYxYzM2NGJkNWI0MZGS72U=: 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
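
[Editor's note] Zooming out, the host/auth.sh@100-103 markers in the trace show the loop structure that generates these repeated blocks: every configured digest is crossed with every DH group and every key index. The sketch below reconstructs only that shape from the markers; the arrays list just the values visible in this slice of the log, and nvmet_auth_set_key / connect_authenticate are the test script's own helpers, not reimplemented here.

# Loop shape reconstructed from the host/auth.sh@100-103 trace markers.
# Only the digests/dhgroups actually observed in this part of the log are
# listed; keys/ckeys stand in for the DHHC-1 secrets the script provisions.
digests=(sha384 sha512)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe6144 ffdhe8192)
keys=()    # keys[0..4]: DHHC-1 host secrets (values elided here)
ckeys=()   # ckeys[0..3]: matching controller secrets; index 4 has none

for digest in "${digests[@]}"; do            # host/auth.sh@100
    for dhgroup in "${dhgroups[@]}"; do      # host/auth.sh@101
        for keyid in "${!keys[@]}"; do       # host/auth.sh@102
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target-side key setup
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # host-side attach + check
        done
    done
done
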
00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.102 19:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.042 nvme0n1 00:29:27.042 19:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.042 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:27.042 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:27.042 19:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.042 19:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.042 19:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.042 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:27.042 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:27.042 19:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.042 19:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.042 19:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.042 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:27.042 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:29:27.042 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:27.042 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:27.042 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:27.042 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:27.042 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRjOWRmZjhjNzIwYmMxMWVmZmYyODc2MWFiNWE2OGY5NGY2MTE3Y2Y5MzkyMDli+nWHXQ==: 00:29:27.042 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: 00:29:27.042 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:27.042 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:27.042 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRjOWRmZjhjNzIwYmMxMWVmZmYyODc2MWFiNWE2OGY5NGY2MTE3Y2Y5MzkyMDli+nWHXQ==: 00:29:27.042 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: ]] 00:29:27.042 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: 00:29:27.042 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:29:27.042 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:27.042 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:27.042 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:27.043 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:27.043 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:27.043 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:27.043 19:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.043 19:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.043 19:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.043 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:27.043 19:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:27.043 19:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:27.043 19:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:27.043 19:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:27.043 19:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:27.043 19:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:27.043 19:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:27.043 19:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:27.043 19:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:27.043 19:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:27.043 19:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:27.043 19:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.043 19:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.614 nvme0n1 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzZhMzdiZWNmMDJiM2JhY2NkYjdmZmY2MjJjMjQ2YjUOHoho: 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDNhMThlNjRjMjJkOTA4YTY4NDcyMmNjMmVkZGE3YTGV3HIc: 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzZhMzdiZWNmMDJiM2JhY2NkYjdmZmY2MjJjMjQ2YjUOHoho: 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDNhMThlNjRjMjJkOTA4YTY4NDcyMmNjMmVkZGE3YTGV3HIc: ]] 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDNhMThlNjRjMjJkOTA4YTY4NDcyMmNjMmVkZGE3YTGV3HIc: 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.614 19:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.555 nvme0n1 00:29:28.555 19:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:28.555 19:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:28.555 19:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:28.555 19:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.555 19:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.555 19:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:28.555 19:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:28.555 19:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:28.555 19:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.555 19:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.555 19:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:28.555 19:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:28.555 19:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:29:28.555 19:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:28.555 19:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:28.555 19:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:28.555 19:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:28.555 19:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YzRjYzQyNDRiNjg4NDBjOWJhZTE5MTcwYjY0MWM4ZjA1YjdmZDViYTQxMWM3MTA1WBVZcA==: 00:29:28.555 19:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmVkZTYwZGE4MGY5MDA2Y2U5NDA1NmYxZWM5NTVjOWPWL4X1: 00:29:28.555 19:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:28.555 19:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:28.555 19:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRjYzQyNDRiNjg4NDBjOWJhZTE5MTcwYjY0MWM4ZjA1YjdmZDViYTQxMWM3MTA1WBVZcA==: 00:29:28.555 19:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmVkZTYwZGE4MGY5MDA2Y2U5NDA1NmYxZWM5NTVjOWPWL4X1: ]] 00:29:28.556 19:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmVkZTYwZGE4MGY5MDA2Y2U5NDA1NmYxZWM5NTVjOWPWL4X1: 00:29:28.556 19:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:29:28.556 19:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:28.556 19:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:28.556 19:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:28.556 19:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:28.556 19:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:28.556 19:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:28.556 19:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.556 19:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.556 19:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:28.556 19:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:28.556 19:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:28.556 19:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:28.556 19:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:28.556 19:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:28.556 19:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:28.556 19:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:28.556 19:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:28.556 19:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:28.556 19:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:28.556 19:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:28.556 19:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:28.556 19:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.556 19:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.493 nvme0n1 00:29:29.493 19:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.493 19:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:29:29.493 19:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:29.493 19:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.493 19:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.493 19:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.493 19:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:29.493 19:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:29.493 19:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.493 19:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.493 19:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.493 19:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:29.493 19:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:29:29.493 19:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:29.493 19:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:29.493 19:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:29.493 19:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:29.493 19:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjUwMjhkOWI0ZDFkNDBhNTk1ZTNhNWU2MjYyYTBhOGM3Y2QzNzI0N2NkODQyZjQ0OTg3ZmM1ZmU0YzM4ZjE1MFh6Sqg=: 00:29:29.493 19:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:29.493 19:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:29.493 19:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:29.493 19:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjUwMjhkOWI0ZDFkNDBhNTk1ZTNhNWU2MjYyYTBhOGM3Y2QzNzI0N2NkODQyZjQ0OTg3ZmM1ZmU0YzM4ZjE1MFh6Sqg=: 00:29:29.493 19:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:29.493 19:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:29:29.493 19:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:29.493 19:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:29.493 19:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:29.493 19:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:29.493 19:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:29.493 19:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:29.493 19:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.493 19:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.493 19:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.493 19:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:29.493 19:44:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:29.493 19:44:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:29.493 19:44:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:29.493 19:44:55 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:29.493 19:44:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:29.493 19:44:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:29.493 19:44:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:29.493 19:44:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:29.493 19:44:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:29.493 19:44:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:29.493 19:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:29.493 19:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.493 19:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.087 nvme0n1 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmUwNWEzZDVhOTI4MzgyZjg4YTM3ZGY5N2Q0ZmNjODIztVi3: 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjgzYTIwMWUyNjc1Yjg0M2Y4ZTg4ZGNiYjYxYjdhMjMyNzg3MTYyOWVlYWVhMWE0OWI4MmYxYzM2NGJkNWI0MZGS72U=: 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YmUwNWEzZDVhOTI4MzgyZjg4YTM3ZGY5N2Q0ZmNjODIztVi3: 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjgzYTIwMWUyNjc1Yjg0M2Y4ZTg4ZGNiYjYxYjdhMjMyNzg3MTYyOWVlYWVhMWE0OWI4MmYxYzM2NGJkNWI0MZGS72U=: ]] 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjgzYTIwMWUyNjc1Yjg0M2Y4ZTg4ZGNiYjYxYjdhMjMyNzg3MTYyOWVlYWVhMWE0OWI4MmYxYzM2NGJkNWI0MZGS72U=: 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.087 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.361 nvme0n1 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.361 19:44:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRjOWRmZjhjNzIwYmMxMWVmZmYyODc2MWFiNWE2OGY5NGY2MTE3Y2Y5MzkyMDli+nWHXQ==: 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRjOWRmZjhjNzIwYmMxMWVmZmYyODc2MWFiNWE2OGY5NGY2MTE3Y2Y5MzkyMDli+nWHXQ==: 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: ]] 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.361 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.623 nvme0n1 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzZhMzdiZWNmMDJiM2JhY2NkYjdmZmY2MjJjMjQ2YjUOHoho: 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDNhMThlNjRjMjJkOTA4YTY4NDcyMmNjMmVkZGE3YTGV3HIc: 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzZhMzdiZWNmMDJiM2JhY2NkYjdmZmY2MjJjMjQ2YjUOHoho: 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDNhMThlNjRjMjJkOTA4YTY4NDcyMmNjMmVkZGE3YTGV3HIc: ]] 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDNhMThlNjRjMjJkOTA4YTY4NDcyMmNjMmVkZGE3YTGV3HIc: 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.623 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.884 nvme0n1 00:29:30.884 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.884 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:30.884 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:30.884 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.884 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.884 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.884 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:30.884 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:30.884 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.884 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.884 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.884 19:44:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:30.884 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:29:30.884 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:30.884 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:30.884 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:30.884 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:30.884 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRjYzQyNDRiNjg4NDBjOWJhZTE5MTcwYjY0MWM4ZjA1YjdmZDViYTQxMWM3MTA1WBVZcA==: 00:29:30.884 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmVkZTYwZGE4MGY5MDA2Y2U5NDA1NmYxZWM5NTVjOWPWL4X1: 00:29:30.884 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:30.884 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:30.884 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRjYzQyNDRiNjg4NDBjOWJhZTE5MTcwYjY0MWM4ZjA1YjdmZDViYTQxMWM3MTA1WBVZcA==: 00:29:30.884 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmVkZTYwZGE4MGY5MDA2Y2U5NDA1NmYxZWM5NTVjOWPWL4X1: ]] 00:29:30.884 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmVkZTYwZGE4MGY5MDA2Y2U5NDA1NmYxZWM5NTVjOWPWL4X1: 00:29:30.884 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:29:30.884 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:30.884 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:30.884 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:30.885 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:30.885 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:30.885 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:30.885 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.885 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.885 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.885 19:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:30.885 19:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:30.885 19:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:30.885 19:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:30.885 19:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:30.885 19:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:30.885 19:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:30.885 19:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:30.885 19:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:30.885 19:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:30.885 19:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:30.885 19:44:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:30.885 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.885 19:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.145 nvme0n1 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjUwMjhkOWI0ZDFkNDBhNTk1ZTNhNWU2MjYyYTBhOGM3Y2QzNzI0N2NkODQyZjQ0OTg3ZmM1ZmU0YzM4ZjE1MFh6Sqg=: 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjUwMjhkOWI0ZDFkNDBhNTk1ZTNhNWU2MjYyYTBhOGM3Y2QzNzI0N2NkODQyZjQ0OTg3ZmM1ZmU0YzM4ZjE1MFh6Sqg=: 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.146 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.406 nvme0n1 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YmUwNWEzZDVhOTI4MzgyZjg4YTM3ZGY5N2Q0ZmNjODIztVi3: 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjgzYTIwMWUyNjc1Yjg0M2Y4ZTg4ZGNiYjYxYjdhMjMyNzg3MTYyOWVlYWVhMWE0OWI4MmYxYzM2NGJkNWI0MZGS72U=: 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmUwNWEzZDVhOTI4MzgyZjg4YTM3ZGY5N2Q0ZmNjODIztVi3: 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjgzYTIwMWUyNjc1Yjg0M2Y4ZTg4ZGNiYjYxYjdhMjMyNzg3MTYyOWVlYWVhMWE0OWI4MmYxYzM2NGJkNWI0MZGS72U=: ]] 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjgzYTIwMWUyNjc1Yjg0M2Y4ZTg4ZGNiYjYxYjdhMjMyNzg3MTYyOWVlYWVhMWE0OWI4MmYxYzM2NGJkNWI0MZGS72U=: 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.406 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.666 nvme0n1 00:29:31.666 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.666 
19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:31.666 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:31.666 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.666 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.666 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.666 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:31.666 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:31.666 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.666 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.666 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.666 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:31.666 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:29:31.666 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:31.666 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:31.666 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:31.666 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:31.666 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRjOWRmZjhjNzIwYmMxMWVmZmYyODc2MWFiNWE2OGY5NGY2MTE3Y2Y5MzkyMDli+nWHXQ==: 00:29:31.666 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: 00:29:31.666 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:31.666 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:31.666 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRjOWRmZjhjNzIwYmMxMWVmZmYyODc2MWFiNWE2OGY5NGY2MTE3Y2Y5MzkyMDli+nWHXQ==: 00:29:31.666 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: ]] 00:29:31.666 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: 00:29:31.666 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:29:31.666 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:31.666 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:31.666 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:31.666 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:31.666 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:31.666 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:31.666 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.666 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.666 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.666 19:44:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:31.666 19:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:31.666 19:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:31.666 19:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:31.666 19:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:31.666 19:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:31.666 19:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:31.666 19:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:31.666 19:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:31.666 19:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:31.666 19:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:31.666 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:31.666 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.666 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.927 nvme0n1 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzZhMzdiZWNmMDJiM2JhY2NkYjdmZmY2MjJjMjQ2YjUOHoho: 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDNhMThlNjRjMjJkOTA4YTY4NDcyMmNjMmVkZGE3YTGV3HIc: 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
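For readers following the trace: each nvmet_auth_set_key pass above (host/auth.sh@42-51) just echoes the digest, DH group and DHHC-1 key strings into the target-side auth configuration. A minimal sketch of that shape; the xtrace only shows the echo commands, not their redirect targets, so $NVMET_HOST_DIR and the dhchap_* attribute names below are assumed placeholders, not the literal auth.sh code.

# Sketch only; $NVMET_HOST_DIR and the dhchap_* file names are assumptions.
nvmet_auth_set_key() {
	local digest=$1 dhgroup=$2 keyid=$3
	local key=${keys[$keyid]} ckey=${ckeys[$keyid]}

	echo "hmac($digest)" > "$NVMET_HOST_DIR/dhchap_hash"    # e.g. hmac(sha512)
	echo "$dhgroup"      > "$NVMET_HOST_DIR/dhchap_dhgroup" # e.g. ffdhe3072
	echo "$key"          > "$NVMET_HOST_DIR/dhchap_key"     # DHHC-1:0X:... string
	# A controller (bidirectional) key is only set when one exists for this keyid.
	[[ -z $ckey ]] || echo "$ckey" > "$NVMET_HOST_DIR/dhchap_ctrl_key"
}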
00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzZhMzdiZWNmMDJiM2JhY2NkYjdmZmY2MjJjMjQ2YjUOHoho: 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDNhMThlNjRjMjJkOTA4YTY4NDcyMmNjMmVkZGE3YTGV3HIc: ]] 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDNhMThlNjRjMjJkOTA4YTY4NDcyMmNjMmVkZGE3YTGV3HIc: 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.927 19:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.186 nvme0n1 00:29:32.186 19:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.186 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:32.186 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:32.186 19:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.186 19:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.186 19:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.186 19:44:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.186 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:32.186 19:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.186 19:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.186 19:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.186 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:32.186 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:29:32.186 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:32.186 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:32.186 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:32.186 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:32.186 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRjYzQyNDRiNjg4NDBjOWJhZTE5MTcwYjY0MWM4ZjA1YjdmZDViYTQxMWM3MTA1WBVZcA==: 00:29:32.186 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmVkZTYwZGE4MGY5MDA2Y2U5NDA1NmYxZWM5NTVjOWPWL4X1: 00:29:32.186 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:32.186 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:32.186 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRjYzQyNDRiNjg4NDBjOWJhZTE5MTcwYjY0MWM4ZjA1YjdmZDViYTQxMWM3MTA1WBVZcA==: 00:29:32.186 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmVkZTYwZGE4MGY5MDA2Y2U5NDA1NmYxZWM5NTVjOWPWL4X1: ]] 00:29:32.186 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmVkZTYwZGE4MGY5MDA2Y2U5NDA1NmYxZWM5NTVjOWPWL4X1: 00:29:32.186 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:29:32.186 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:32.186 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:32.186 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:32.186 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:32.186 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:32.186 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:32.186 19:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.186 19:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.186 19:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.186 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:32.186 19:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:32.186 19:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:32.186 19:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:32.186 19:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:32.186 19:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
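The get_main_ns_ip helper traced around this point (nvmf/common.sh@741-755) only picks which environment variable to read for the transport in use and prints its value; for tcp that resolves to 10.0.0.1 in this run. A close approximation, assuming TEST_TRANSPORT and NVMF_INITIATOR_IP/NVMF_FIRST_TARGET_IP are exported by the test environment (anything not visible in the trace is a placeholder):

get_main_ns_ip() {
	local ip var
	local -A ip_candidates=(
		[rdma]=NVMF_FIRST_TARGET_IP  # RDMA jobs read the first target IP
		[tcp]=NVMF_INITIATOR_IP      # TCP jobs (this run) read the initiator IP
	)

	[[ -z ${TEST_TRANSPORT:-} ]] && return 1
	var=${ip_candidates[$TEST_TRANSPORT]:-}
	[[ -z $var ]] && return 1
	ip=${!var}                       # indirect expansion -> 10.0.0.1 in this run
	[[ -z $ip ]] && return 1
	echo "$ip"
}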
00:29:32.186 19:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:32.186 19:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:32.186 19:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:32.186 19:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:32.186 19:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:32.187 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:32.187 19:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.187 19:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.446 nvme0n1 00:29:32.446 19:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.446 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:32.446 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:32.446 19:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.446 19:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.446 19:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.446 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.446 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:32.446 19:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.446 19:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.446 19:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.446 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:32.446 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:29:32.446 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:32.446 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:32.446 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:32.446 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:32.446 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjUwMjhkOWI0ZDFkNDBhNTk1ZTNhNWU2MjYyYTBhOGM3Y2QzNzI0N2NkODQyZjQ0OTg3ZmM1ZmU0YzM4ZjE1MFh6Sqg=: 00:29:32.446 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:32.446 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:32.446 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:32.446 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjUwMjhkOWI0ZDFkNDBhNTk1ZTNhNWU2MjYyYTBhOGM3Y2QzNzI0N2NkODQyZjQ0OTg3ZmM1ZmU0YzM4ZjE1MFh6Sqg=: 00:29:32.446 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:32.446 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:29:32.446 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:32.446 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:32.446 
19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:32.446 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:32.446 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:32.446 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:32.446 19:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.446 19:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.446 19:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.446 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:32.446 19:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:32.446 19:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:32.446 19:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:32.446 19:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:32.446 19:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:32.446 19:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:32.446 19:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:32.446 19:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:32.446 19:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:32.446 19:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:32.446 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:32.446 19:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.446 19:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.707 nvme0n1 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmUwNWEzZDVhOTI4MzgyZjg4YTM3ZGY5N2Q0ZmNjODIztVi3: 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjgzYTIwMWUyNjc1Yjg0M2Y4ZTg4ZGNiYjYxYjdhMjMyNzg3MTYyOWVlYWVhMWE0OWI4MmYxYzM2NGJkNWI0MZGS72U=: 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmUwNWEzZDVhOTI4MzgyZjg4YTM3ZGY5N2Q0ZmNjODIztVi3: 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjgzYTIwMWUyNjc1Yjg0M2Y4ZTg4ZGNiYjYxYjdhMjMyNzg3MTYyOWVlYWVhMWE0OWI4MmYxYzM2NGJkNWI0MZGS72U=: ]] 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjgzYTIwMWUyNjc1Yjg0M2Y4ZTg4ZGNiYjYxYjdhMjMyNzg3MTYyOWVlYWVhMWE0OWI4MmYxYzM2NGJkNWI0MZGS72U=: 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.707 19:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.968 nvme0n1 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRjOWRmZjhjNzIwYmMxMWVmZmYyODc2MWFiNWE2OGY5NGY2MTE3Y2Y5MzkyMDli+nWHXQ==: 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRjOWRmZjhjNzIwYmMxMWVmZmYyODc2MWFiNWE2OGY5NGY2MTE3Y2Y5MzkyMDli+nWHXQ==: 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: ]] 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:32.968 19:44:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.968 19:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.539 nvme0n1 00:29:33.539 19:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.539 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:33.539 19:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.539 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:33.539 19:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.539 19:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.539 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:33.539 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:33.539 19:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.539 19:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.539 19:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.539 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:33.539 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:29:33.539 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:33.539 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:33.539 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:33.539 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
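The for-loop markers that recur through this whole section (host/auth.sh@101-104: for dhgroup, for keyid, nvmet_auth_set_key, connect_authenticate) come from a nested sweep over DH groups and key IDs for the sha512 digest. A simplified reconstruction of that driver loop; the groups listed are only the ones exercised in this part of the trace, and keys/ckeys hold the DHHC-1 strings echoed above, with ckeys[4] left empty on purpose.

# Reconstructed shape of the sweep seen in this log segment (sha512 pass only).
digest=sha512
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)  # groups visible in this segment

for dhgroup in "${dhgroups[@]}"; do
	for keyid in "${!keys[@]}"; do        # keyids 0..4 in this run
		nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # program the target side
		connect_authenticate "$digest" "$dhgroup" "$keyid" # connect, verify, detach
	done
done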
00:29:33.539 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzZhMzdiZWNmMDJiM2JhY2NkYjdmZmY2MjJjMjQ2YjUOHoho: 00:29:33.539 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDNhMThlNjRjMjJkOTA4YTY4NDcyMmNjMmVkZGE3YTGV3HIc: 00:29:33.539 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:33.539 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:33.539 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzZhMzdiZWNmMDJiM2JhY2NkYjdmZmY2MjJjMjQ2YjUOHoho: 00:29:33.539 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDNhMThlNjRjMjJkOTA4YTY4NDcyMmNjMmVkZGE3YTGV3HIc: ]] 00:29:33.539 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDNhMThlNjRjMjJkOTA4YTY4NDcyMmNjMmVkZGE3YTGV3HIc: 00:29:33.539 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:29:33.539 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:33.539 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:33.539 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:33.539 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:33.539 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:33.539 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:33.539 19:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.539 19:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.539 19:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.539 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:33.539 19:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:33.539 19:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:33.539 19:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:33.540 19:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:33.540 19:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:33.540 19:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:33.540 19:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:33.540 19:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:33.540 19:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:33.540 19:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:33.540 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:33.540 19:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.540 19:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.800 nvme0n1 00:29:33.800 19:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.800 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:29:33.800 19:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.800 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:33.800 19:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.800 19:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.800 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:33.800 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:33.800 19:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.800 19:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.800 19:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.800 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:33.800 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:29:33.800 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:33.800 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:33.800 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:33.800 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:33.800 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRjYzQyNDRiNjg4NDBjOWJhZTE5MTcwYjY0MWM4ZjA1YjdmZDViYTQxMWM3MTA1WBVZcA==: 00:29:33.800 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmVkZTYwZGE4MGY5MDA2Y2U5NDA1NmYxZWM5NTVjOWPWL4X1: 00:29:33.800 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:33.800 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:33.800 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRjYzQyNDRiNjg4NDBjOWJhZTE5MTcwYjY0MWM4ZjA1YjdmZDViYTQxMWM3MTA1WBVZcA==: 00:29:33.800 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmVkZTYwZGE4MGY5MDA2Y2U5NDA1NmYxZWM5NTVjOWPWL4X1: ]] 00:29:33.800 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmVkZTYwZGE4MGY5MDA2Y2U5NDA1NmYxZWM5NTVjOWPWL4X1: 00:29:33.800 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:29:33.800 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:33.800 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:33.800 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:33.800 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:33.800 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:33.800 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:33.800 19:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.800 19:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.800 19:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.800 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:33.800 19:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:29:33.801 19:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:33.801 19:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:33.801 19:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:33.801 19:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:33.801 19:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:33.801 19:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:33.801 19:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:33.801 19:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:33.801 19:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:33.801 19:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:33.801 19:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.801 19:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.060 nvme0n1 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjUwMjhkOWI0ZDFkNDBhNTk1ZTNhNWU2MjYyYTBhOGM3Y2QzNzI0N2NkODQyZjQ0OTg3ZmM1ZmU0YzM4ZjE1MFh6Sqg=: 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZjUwMjhkOWI0ZDFkNDBhNTk1ZTNhNWU2MjYyYTBhOGM3Y2QzNzI0N2NkODQyZjQ0OTg3ZmM1ZmU0YzM4ZjE1MFh6Sqg=: 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.060 19:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.630 nvme0n1 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmUwNWEzZDVhOTI4MzgyZjg4YTM3ZGY5N2Q0ZmNjODIztVi3: 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjgzYTIwMWUyNjc1Yjg0M2Y4ZTg4ZGNiYjYxYjdhMjMyNzg3MTYyOWVlYWVhMWE0OWI4MmYxYzM2NGJkNWI0MZGS72U=: 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmUwNWEzZDVhOTI4MzgyZjg4YTM3ZGY5N2Q0ZmNjODIztVi3: 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjgzYTIwMWUyNjc1Yjg0M2Y4ZTg4ZGNiYjYxYjdhMjMyNzg3MTYyOWVlYWVhMWE0OWI4MmYxYzM2NGJkNWI0MZGS72U=: ]] 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjgzYTIwMWUyNjc1Yjg0M2Y4ZTg4ZGNiYjYxYjdhMjMyNzg3MTYyOWVlYWVhMWE0OWI4MmYxYzM2NGJkNWI0MZGS72U=: 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
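Every connect_authenticate pass in this trace (host/auth.sh@55-65) runs the same four RPC steps against the SPDK initiator: constrain the allowed digest/DH group, attach with the DH-HMAC-CHAP key(s), confirm the controller registered as nvme0, then detach. Condensed into a sketch; rpc_cmd is the test suite's RPC wrapper, and the address and NQNs are the ones shown in the trace:

connect_authenticate() {
	local digest=$1 dhgroup=$2 keyid=$3
	local ctrlr_key=()
	# Only pass --dhchap-ctrlr-key when a controller key exists for this keyid.
	[[ -n ${ckeys[$keyid]} ]] && ctrlr_key=(--dhchap-ctrlr-key "ckey$keyid")

	# Restrict the initiator to the digest/dhgroup under test.
	rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

	# Connect with DH-HMAC-CHAP, check the controller came up, then tear it down.
	rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
		-a "$(get_main_ns_ip)" -s 4420 \
		-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
		--dhchap-key "key$keyid" ${ctrlr_key[@]+"${ctrlr_key[@]}"}

	[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
	rpc_cmd bdev_nvme_detach_controller nvme0
}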
00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.630 19:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.891 nvme0n1 00:29:34.891 19:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.891 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:34.891 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:34.891 19:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.891 19:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.891 19:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.151 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:35.151 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:35.151 19:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.151 19:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.151 19:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.151 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:35.151 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:29:35.151 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:35.151 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:35.151 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:35.151 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:35.151 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRjOWRmZjhjNzIwYmMxMWVmZmYyODc2MWFiNWE2OGY5NGY2MTE3Y2Y5MzkyMDli+nWHXQ==: 00:29:35.151 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: 00:29:35.151 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:35.151 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:35.151 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRjOWRmZjhjNzIwYmMxMWVmZmYyODc2MWFiNWE2OGY5NGY2MTE3Y2Y5MzkyMDli+nWHXQ==: 00:29:35.151 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: ]] 00:29:35.151 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: 00:29:35.151 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
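The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) lines scattered through the trace rely on bash's ${var:+word} expansion: the array picks up the two extra controller-key arguments only when a ckey is configured for that key ID, and stays empty otherwise, which is why the key-4 attaches above carry no --dhchap-ctrlr-key. A standalone illustration with dummy values:

#!/usr/bin/env bash
# ${var:+word} expands to 'word' only when var is set and non-empty.
declare -A ckeys=([1]="DHHC-1:02:dummy" [4]="")   # placeholder values, not real keys

for keyid in 1 4; do
	ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
	echo "keyid=$keyid -> ${#ckey[@]} extra arg(s): ${ckey[*]}"
done
# keyid=1 -> 2 extra arg(s): --dhchap-ctrlr-key ckey1
# keyid=4 -> 0 extra arg(s):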
00:29:35.151 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:35.151 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:35.151 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:35.151 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:35.151 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:35.151 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:35.151 19:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.151 19:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.151 19:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.151 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:35.151 19:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:35.151 19:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:35.151 19:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:35.151 19:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:35.151 19:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:35.151 19:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:35.151 19:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:35.151 19:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:35.151 19:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:35.151 19:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:35.151 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:35.151 19:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.151 19:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.411 nvme0n1 00:29:35.411 19:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.411 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:35.411 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:35.411 19:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.411 19:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.411 19:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.673 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:35.673 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:35.673 19:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.673 19:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.673 19:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.673 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:29:35.673 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:29:35.673 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:35.673 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:35.673 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:35.673 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:35.673 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzZhMzdiZWNmMDJiM2JhY2NkYjdmZmY2MjJjMjQ2YjUOHoho: 00:29:35.673 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDNhMThlNjRjMjJkOTA4YTY4NDcyMmNjMmVkZGE3YTGV3HIc: 00:29:35.673 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:35.673 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:35.673 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzZhMzdiZWNmMDJiM2JhY2NkYjdmZmY2MjJjMjQ2YjUOHoho: 00:29:35.673 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDNhMThlNjRjMjJkOTA4YTY4NDcyMmNjMmVkZGE3YTGV3HIc: ]] 00:29:35.673 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDNhMThlNjRjMjJkOTA4YTY4NDcyMmNjMmVkZGE3YTGV3HIc: 00:29:35.673 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:29:35.673 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:35.673 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:35.673 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:35.673 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:35.673 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:35.673 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:35.673 19:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.673 19:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.673 19:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.673 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:35.673 19:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:35.673 19:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:35.673 19:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:35.673 19:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:35.673 19:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:35.673 19:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:35.673 19:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:35.673 19:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:35.673 19:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:35.673 19:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:35.673 19:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:35.673 19:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.673 19:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.934 nvme0n1 00:29:35.934 19:45:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRjYzQyNDRiNjg4NDBjOWJhZTE5MTcwYjY0MWM4ZjA1YjdmZDViYTQxMWM3MTA1WBVZcA==: 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmVkZTYwZGE4MGY5MDA2Y2U5NDA1NmYxZWM5NTVjOWPWL4X1: 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRjYzQyNDRiNjg4NDBjOWJhZTE5MTcwYjY0MWM4ZjA1YjdmZDViYTQxMWM3MTA1WBVZcA==: 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmVkZTYwZGE4MGY5MDA2Y2U5NDA1NmYxZWM5NTVjOWPWL4X1: ]] 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmVkZTYwZGE4MGY5MDA2Y2U5NDA1NmYxZWM5NTVjOWPWL4X1: 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.198 19:45:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.769 nvme0n1 00:29:36.769 19:45:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.769 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:36.769 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:36.769 19:45:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.769 19:45:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.769 19:45:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.769 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:36.769 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:36.769 19:45:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.769 19:45:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.770 19:45:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.770 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:36.770 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:29:36.770 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:36.770 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:36.770 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:36.770 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:36.770 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZjUwMjhkOWI0ZDFkNDBhNTk1ZTNhNWU2MjYyYTBhOGM3Y2QzNzI0N2NkODQyZjQ0OTg3ZmM1ZmU0YzM4ZjE1MFh6Sqg=: 00:29:36.770 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:36.770 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:36.770 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:36.770 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjUwMjhkOWI0ZDFkNDBhNTk1ZTNhNWU2MjYyYTBhOGM3Y2QzNzI0N2NkODQyZjQ0OTg3ZmM1ZmU0YzM4ZjE1MFh6Sqg=: 00:29:36.770 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:36.770 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:29:36.770 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:36.770 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:36.770 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:36.770 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:36.770 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:36.770 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:36.770 19:45:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.770 19:45:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.770 19:45:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.770 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:36.770 19:45:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:36.770 19:45:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:36.770 19:45:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:36.770 19:45:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:36.770 19:45:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:36.770 19:45:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:36.770 19:45:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:36.770 19:45:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:36.770 19:45:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:36.770 19:45:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:36.770 19:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:36.770 19:45:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.770 19:45:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.337 nvme0n1 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:37.337 19:45:03 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmUwNWEzZDVhOTI4MzgyZjg4YTM3ZGY5N2Q0ZmNjODIztVi3: 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjgzYTIwMWUyNjc1Yjg0M2Y4ZTg4ZGNiYjYxYjdhMjMyNzg3MTYyOWVlYWVhMWE0OWI4MmYxYzM2NGJkNWI0MZGS72U=: 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmUwNWEzZDVhOTI4MzgyZjg4YTM3ZGY5N2Q0ZmNjODIztVi3: 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjgzYTIwMWUyNjc1Yjg0M2Y4ZTg4ZGNiYjYxYjdhMjMyNzg3MTYyOWVlYWVhMWE0OWI4MmYxYzM2NGJkNWI0MZGS72U=: ]] 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjgzYTIwMWUyNjc1Yjg0M2Y4ZTg4ZGNiYjYxYjdhMjMyNzg3MTYyOWVlYWVhMWE0OWI4MmYxYzM2NGJkNWI0MZGS72U=: 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.337 19:45:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.913 nvme0n1 00:29:37.913 19:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.913 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:37.913 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:37.913 19:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.913 19:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.913 19:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.175 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:38.175 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:38.175 19:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.175 19:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.175 19:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.175 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:38.175 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:29:38.175 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:38.175 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:38.175 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:38.175 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:38.175 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRjOWRmZjhjNzIwYmMxMWVmZmYyODc2MWFiNWE2OGY5NGY2MTE3Y2Y5MzkyMDli+nWHXQ==: 00:29:38.175 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: 00:29:38.175 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:38.175 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:38.175 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzRjOWRmZjhjNzIwYmMxMWVmZmYyODc2MWFiNWE2OGY5NGY2MTE3Y2Y5MzkyMDli+nWHXQ==: 00:29:38.175 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: ]] 00:29:38.175 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: 00:29:38.175 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:29:38.175 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:38.175 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:38.175 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:38.175 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:38.175 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:38.175 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:38.175 19:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.175 19:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.175 19:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.175 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:38.175 19:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:38.175 19:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:38.175 19:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:38.175 19:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:38.175 19:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:38.175 19:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:38.175 19:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:38.175 19:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:38.175 19:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:38.175 19:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:38.175 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:38.175 19:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.175 19:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.746 nvme0n1 00:29:38.746 19:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.746 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:38.746 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:38.746 19:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.746 19:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.746 19:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.007 19:45:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:39.007 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:39.007 19:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.007 19:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.007 19:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.007 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:39.007 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:29:39.007 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:39.007 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:39.007 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:39.007 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:39.007 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzZhMzdiZWNmMDJiM2JhY2NkYjdmZmY2MjJjMjQ2YjUOHoho: 00:29:39.007 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDNhMThlNjRjMjJkOTA4YTY4NDcyMmNjMmVkZGE3YTGV3HIc: 00:29:39.007 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:39.007 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:39.007 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzZhMzdiZWNmMDJiM2JhY2NkYjdmZmY2MjJjMjQ2YjUOHoho: 00:29:39.007 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDNhMThlNjRjMjJkOTA4YTY4NDcyMmNjMmVkZGE3YTGV3HIc: ]] 00:29:39.007 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDNhMThlNjRjMjJkOTA4YTY4NDcyMmNjMmVkZGE3YTGV3HIc: 00:29:39.007 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:29:39.007 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:39.007 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:39.007 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:39.007 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:39.007 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:39.007 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:39.007 19:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.007 19:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.007 19:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.007 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:39.007 19:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:39.007 19:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:39.007 19:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:39.007 19:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:39.007 19:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:39.007 19:45:04 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:39.007 19:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:39.007 19:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:39.007 19:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:39.007 19:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:39.007 19:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:39.007 19:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.007 19:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.579 nvme0n1 00:29:39.579 19:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.579 19:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:39.579 19:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:39.579 19:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.579 19:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.579 19:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.841 19:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:39.841 19:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:39.841 19:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.841 19:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.841 19:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.841 19:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:39.841 19:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:29:39.841 19:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:39.841 19:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:39.841 19:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:39.841 19:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:39.841 19:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRjYzQyNDRiNjg4NDBjOWJhZTE5MTcwYjY0MWM4ZjA1YjdmZDViYTQxMWM3MTA1WBVZcA==: 00:29:39.841 19:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmVkZTYwZGE4MGY5MDA2Y2U5NDA1NmYxZWM5NTVjOWPWL4X1: 00:29:39.841 19:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:39.841 19:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:39.841 19:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRjYzQyNDRiNjg4NDBjOWJhZTE5MTcwYjY0MWM4ZjA1YjdmZDViYTQxMWM3MTA1WBVZcA==: 00:29:39.841 19:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmVkZTYwZGE4MGY5MDA2Y2U5NDA1NmYxZWM5NTVjOWPWL4X1: ]] 00:29:39.841 19:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmVkZTYwZGE4MGY5MDA2Y2U5NDA1NmYxZWM5NTVjOWPWL4X1: 00:29:39.841 19:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:29:39.841 19:45:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:39.841 19:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:39.841 19:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:39.841 19:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:39.841 19:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:39.841 19:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:39.841 19:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.841 19:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.841 19:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.841 19:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:39.841 19:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:39.841 19:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:39.841 19:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:39.841 19:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:39.841 19:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:39.841 19:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:39.841 19:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:39.841 19:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:39.841 19:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:39.841 19:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:39.841 19:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:39.841 19:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.841 19:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.413 nvme0n1 00:29:40.413 19:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:40.413 19:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:40.413 19:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:40.413 19:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:40.413 19:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.413 19:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:40.674 19:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:40.674 19:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:40.674 19:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:40.674 19:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.674 19:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:40.674 19:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:29:40.674 19:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:29:40.674 19:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:40.674 19:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:40.674 19:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:40.674 19:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:40.674 19:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjUwMjhkOWI0ZDFkNDBhNTk1ZTNhNWU2MjYyYTBhOGM3Y2QzNzI0N2NkODQyZjQ0OTg3ZmM1ZmU0YzM4ZjE1MFh6Sqg=: 00:29:40.674 19:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:40.674 19:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:40.674 19:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:40.674 19:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjUwMjhkOWI0ZDFkNDBhNTk1ZTNhNWU2MjYyYTBhOGM3Y2QzNzI0N2NkODQyZjQ0OTg3ZmM1ZmU0YzM4ZjE1MFh6Sqg=: 00:29:40.674 19:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:40.674 19:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:29:40.674 19:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:40.674 19:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:40.674 19:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:40.674 19:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:40.674 19:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:40.674 19:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:40.674 19:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:40.674 19:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.674 19:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:40.674 19:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:40.674 19:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:40.674 19:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:40.674 19:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:40.674 19:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:40.674 19:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:40.674 19:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:40.674 19:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:40.674 19:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:40.674 19:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:40.674 19:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:40.674 19:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:40.674 19:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:29:40.674 19:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.363 nvme0n1 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRjOWRmZjhjNzIwYmMxMWVmZmYyODc2MWFiNWE2OGY5NGY2MTE3Y2Y5MzkyMDli+nWHXQ==: 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRjOWRmZjhjNzIwYmMxMWVmZmYyODc2MWFiNWE2OGY5NGY2MTE3Y2Y5MzkyMDli+nWHXQ==: 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: ]] 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzY1NTYxZjA2MWJkMmMzMTVmNThiMzYzODA4ZDEzNjIxYmIyNTE1MGY0OTA2NTk2xEZYkg==: 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:41.363 
19:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:41.363 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.364 request: 00:29:41.364 { 00:29:41.364 "name": "nvme0", 00:29:41.364 "trtype": "tcp", 00:29:41.364 "traddr": "10.0.0.1", 00:29:41.364 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:41.364 "adrfam": "ipv4", 00:29:41.364 "trsvcid": "4420", 00:29:41.364 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:41.364 "method": "bdev_nvme_attach_controller", 00:29:41.364 "req_id": 1 00:29:41.364 } 00:29:41.364 Got JSON-RPC error response 00:29:41.364 response: 00:29:41.364 { 00:29:41.364 "code": -32602, 00:29:41.364 "message": "Invalid parameters" 00:29:41.364 } 00:29:41.364 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:41.364 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:29:41.364 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:41.364 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:41.364 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:41.364 19:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:29:41.364 19:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:29:41.364 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:41.364 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.364 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:29:41.626 
19:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.626 request: 00:29:41.626 { 00:29:41.626 "name": "nvme0", 00:29:41.626 "trtype": "tcp", 00:29:41.626 "traddr": "10.0.0.1", 00:29:41.626 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:41.626 "adrfam": "ipv4", 00:29:41.626 "trsvcid": "4420", 00:29:41.626 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:41.626 "dhchap_key": "key2", 00:29:41.626 "method": "bdev_nvme_attach_controller", 00:29:41.626 "req_id": 1 00:29:41.626 } 00:29:41.626 Got JSON-RPC error response 00:29:41.626 response: 00:29:41.626 { 00:29:41.626 "code": -32602, 00:29:41.626 "message": "Invalid parameters" 00:29:41.626 } 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
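[editor note] A minimal sketch of the expected-failure pattern traced above: attach attempts with no DH-HMAC-CHAP key, or with a key the target is not configured for (key2 while the nvmet side was given key1), must be rejected with a JSON-RPC -32602 error, so the test wraps them in a status-inverting helper. "NOT" is the name used by the trace; this body is an approximation, not the verbatim autotest_common.sh implementation, and rpc_cmd is the same assumed wrapper as in the earlier sketch.

    rpc_cmd() { "${SPDK_DIR}/scripts/rpc.py" "$@"; }   # same assumption as above

    NOT() {
        if "$@"; then
            return 1        # the command succeeded although failure was expected
        fi
        return 0            # failure is the expected outcome
    }

    # No key at all -> authentication cannot complete, the attach must fail.
    NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0

    # Wrong key (key2 while the target expects key1) -> the attach must also fail.
    NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2

    # Neither failed attempt may leave a controller behind.
    (( $(rpc_cmd bdev_nvme_get_controllers | jq length) == 0 ))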
00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.626 request: 00:29:41.626 { 00:29:41.626 "name": "nvme0", 00:29:41.626 "trtype": "tcp", 00:29:41.626 "traddr": "10.0.0.1", 00:29:41.626 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:41.626 "adrfam": "ipv4", 00:29:41.626 "trsvcid": "4420", 00:29:41.626 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:41.626 "dhchap_key": "key1", 00:29:41.626 "dhchap_ctrlr_key": "ckey2", 00:29:41.626 "method": "bdev_nvme_attach_controller", 00:29:41.626 
"req_id": 1 00:29:41.626 } 00:29:41.626 Got JSON-RPC error response 00:29:41.626 response: 00:29:41.626 { 00:29:41.626 "code": -32602, 00:29:41.626 "message": "Invalid parameters" 00:29:41.626 } 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:41.626 19:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:41.626 rmmod nvme_tcp 00:29:41.887 rmmod nvme_fabrics 00:29:41.887 19:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:41.887 19:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:29:41.887 19:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:29:41.887 19:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 3766374 ']' 00:29:41.887 19:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 3766374 00:29:41.887 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@946 -- # '[' -z 3766374 ']' 00:29:41.887 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@950 -- # kill -0 3766374 00:29:41.887 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # uname 00:29:41.887 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:41.887 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3766374 00:29:41.887 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:41.887 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:41.888 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3766374' 00:29:41.888 killing process with pid 3766374 00:29:41.888 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # kill 3766374 00:29:41.888 19:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@970 -- # wait 3766374 00:29:41.888 19:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:41.888 19:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:41.888 19:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:41.888 19:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:41.888 19:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:41.888 
19:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:41.888 19:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:41.888 19:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:44.437 19:45:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:44.437 19:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:44.437 19:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:44.437 19:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:29:44.437 19:45:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:29:44.437 19:45:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:29:44.437 19:45:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:44.437 19:45:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:44.437 19:45:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:44.437 19:45:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:44.437 19:45:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:29:44.437 19:45:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:29:44.437 19:45:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:48.649 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:48.649 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:48.649 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:48.649 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:48.649 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:48.649 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:48.649 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:48.649 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:48.649 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:48.649 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:48.649 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:48.649 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:48.649 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:48.649 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:48.649 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:48.649 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:48.649 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:29:48.649 19:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.3xK /tmp/spdk.key-null.rNi /tmp/spdk.key-sha256.3XQ /tmp/spdk.key-sha384.Cmh /tmp/spdk.key-sha512.Qwf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:29:48.650 19:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:52.857 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:29:52.857 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:29:52.857 0000:80:01.4 (8086 0b00): Already 
using the vfio-pci driver 00:29:52.857 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:29:52.857 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:29:52.857 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:29:52.857 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:29:52.857 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:29:52.857 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:29:52.857 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:29:52.857 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:29:52.857 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:29:52.857 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:29:52.857 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:29:52.857 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:29:52.857 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:29:52.857 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:29:52.857 00:29:52.857 real 1m1.291s 00:29:52.857 user 0m54.031s 00:29:52.857 sys 0m17.005s 00:29:52.857 19:45:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:52.857 19:45:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.857 ************************************ 00:29:52.857 END TEST nvmf_auth_host 00:29:52.857 ************************************ 00:29:52.857 19:45:18 nvmf_tcp -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:29:52.857 19:45:18 nvmf_tcp -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:52.857 19:45:18 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:52.857 19:45:18 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:52.857 19:45:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:52.857 ************************************ 00:29:52.857 START TEST nvmf_digest 00:29:52.857 ************************************ 00:29:52.857 19:45:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:52.857 * Looking for test storage... 
00:29:52.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:52.857 19:45:19 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:52.857 19:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:29:52.857 19:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:52.857 19:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:52.857 19:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:52.857 19:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:52.857 19:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:52.857 19:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:52.857 19:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:52.857 19:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:52.857 19:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:52.857 19:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:52.857 19:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:52.857 19:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:52.857 19:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:52.857 19:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:52.857 19:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:52.857 19:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:52.857 19:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:52.857 19:45:19 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:52.857 19:45:19 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:52.857 19:45:19 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:52.857 19:45:19 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.857 19:45:19 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.857 19:45:19 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.857 19:45:19 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:29:52.857 19:45:19 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.857 19:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:29:52.857 19:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:52.857 19:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:52.857 19:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:52.857 19:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:52.857 19:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:52.857 19:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:52.857 19:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:52.857 19:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:53.118 19:45:19 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:53.118 19:45:19 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:29:53.118 19:45:19 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:29:53.118 19:45:19 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:29:53.118 19:45:19 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:29:53.118 19:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:53.118 19:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:53.118 19:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:53.118 19:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:53.118 19:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:53.118 19:45:19 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:53.118 19:45:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:53.118 19:45:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:53.118 19:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:53.118 19:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:53.118 19:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:29:53.118 19:45:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:01.264 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:01.264 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:01.264 Found net devices under 0000:31:00.0: cvl_0_0 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:01.264 Found net devices under 0000:31:00.1: cvl_0_1 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:01.264 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:01.264 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.752 ms 00:30:01.264 00:30:01.264 --- 10.0.0.2 ping statistics --- 00:30:01.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:01.264 rtt min/avg/max/mdev = 0.752/0.752/0.752/0.000 ms 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:01.264 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:01.264 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.350 ms 00:30:01.264 00:30:01.264 --- 10.0.0.1 ping statistics --- 00:30:01.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:01.264 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:01.264 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:01.265 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:01.265 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:01.265 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:01.265 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:01.265 19:45:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:01.265 19:45:27 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:01.265 19:45:27 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:30:01.265 19:45:27 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:30:01.265 19:45:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:01.265 19:45:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:01.265 19:45:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:01.526 ************************************ 00:30:01.526 START TEST nvmf_digest_clean 00:30:01.526 ************************************ 00:30:01.526 19:45:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:30:01.526 19:45:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:30:01.526 19:45:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:30:01.526 19:45:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:30:01.526 19:45:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:30:01.526 19:45:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:30:01.526 19:45:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:01.526 19:45:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:01.526 19:45:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:01.526 19:45:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=3784675 00:30:01.526 19:45:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 3784675 00:30:01.526 19:45:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3784675 ']' 00:30:01.526 19:45:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:01.526 19:45:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:01.526 
19:45:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:01.526 19:45:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:01.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:01.526 19:45:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:01.526 19:45:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:01.526 [2024-05-15 19:45:27.553189] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:30:01.526 [2024-05-15 19:45:27.553248] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:01.526 EAL: No free 2048 kB hugepages reported on node 1 00:30:01.526 [2024-05-15 19:45:27.649465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:01.787 [2024-05-15 19:45:27.744712] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:01.787 [2024-05-15 19:45:27.744775] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:01.787 [2024-05-15 19:45:27.744783] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:01.787 [2024-05-15 19:45:27.744791] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:01.787 [2024-05-15 19:45:27.744797] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:01.787 [2024-05-15 19:45:27.744827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:02.360 19:45:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:02.360 19:45:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:30:02.360 19:45:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:02.360 19:45:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:02.360 19:45:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:02.360 19:45:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:02.360 19:45:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:30:02.360 19:45:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:30:02.360 19:45:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:30:02.360 19:45:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:02.360 19:45:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:02.622 null0 00:30:02.622 [2024-05-15 19:45:28.578436] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:02.622 [2024-05-15 19:45:28.602412] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:30:02.622 [2024-05-15 19:45:28.602721] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:02.622 19:45:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:02.622 19:45:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:30:02.622 19:45:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:02.622 19:45:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:02.622 19:45:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:30:02.622 19:45:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:30:02.622 19:45:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:30:02.622 19:45:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:02.622 19:45:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3784975 00:30:02.622 19:45:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3784975 /var/tmp/bperf.sock 00:30:02.622 19:45:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3784975 ']' 00:30:02.622 19:45:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:02.622 19:45:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:02.622 19:45:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:30:02.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:02.622 19:45:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:02.622 19:45:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:02.622 19:45:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:02.622 [2024-05-15 19:45:28.658267] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:30:02.622 [2024-05-15 19:45:28.658335] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3784975 ] 00:30:02.622 EAL: No free 2048 kB hugepages reported on node 1 00:30:02.622 [2024-05-15 19:45:28.731569] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:02.622 [2024-05-15 19:45:28.803966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:02.884 19:45:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:02.884 19:45:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:30:02.884 19:45:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:02.884 19:45:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:02.884 19:45:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:03.145 19:45:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:03.145 19:45:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:03.406 nvme0n1 00:30:03.406 19:45:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:03.406 19:45:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:03.406 Running I/O for 2 seconds... 
00:30:05.321 00:30:05.321 Latency(us) 00:30:05.321 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:05.321 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:05.321 nvme0n1 : 2.00 20589.71 80.43 0.00 0.00 6208.99 2976.43 16165.55 00:30:05.321 =================================================================================================================== 00:30:05.321 Total : 20589.71 80.43 0.00 0.00 6208.99 2976.43 16165.55 00:30:05.321 0 00:30:05.321 19:45:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:05.321 19:45:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:05.321 19:45:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:05.321 19:45:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:05.321 | select(.opcode=="crc32c") 00:30:05.321 | "\(.module_name) \(.executed)"' 00:30:05.321 19:45:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:05.582 19:45:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:05.582 19:45:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:05.582 19:45:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:05.582 19:45:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:05.582 19:45:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3784975 00:30:05.582 19:45:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3784975 ']' 00:30:05.582 19:45:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3784975 00:30:05.582 19:45:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:30:05.582 19:45:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:05.582 19:45:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3784975 00:30:05.582 19:45:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:30:05.582 19:45:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:30:05.582 19:45:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3784975' 00:30:05.582 killing process with pid 3784975 00:30:05.582 19:45:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3784975 00:30:05.582 Received shutdown signal, test time was about 2.000000 seconds 00:30:05.583 00:30:05.583 Latency(us) 00:30:05.583 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:05.583 =================================================================================================================== 00:30:05.583 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:05.583 19:45:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3784975 00:30:05.844 19:45:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:30:05.844 19:45:31 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:05.844 19:45:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:05.844 19:45:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:30:05.844 19:45:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:30:05.844 19:45:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:30:05.844 19:45:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:05.844 19:45:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3785654 00:30:05.844 19:45:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3785654 /var/tmp/bperf.sock 00:30:05.844 19:45:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3785654 ']' 00:30:05.844 19:45:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:05.844 19:45:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:05.844 19:45:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:05.844 19:45:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:05.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:05.844 19:45:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:05.844 19:45:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:05.844 [2024-05-15 19:45:31.940940] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:30:05.844 [2024-05-15 19:45:31.940997] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3785654 ] 00:30:05.844 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:05.844 Zero copy mechanism will not be used. 
00:30:05.844 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.844 [2024-05-15 19:45:32.007974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.105 [2024-05-15 19:45:32.071029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:06.105 19:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:06.105 19:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:30:06.105 19:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:06.105 19:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:06.105 19:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:06.366 19:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:06.366 19:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:06.627 nvme0n1 00:30:06.627 19:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:06.627 19:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:06.627 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:06.627 Zero copy mechanism will not be used. 00:30:06.627 Running I/O for 2 seconds... 
00:30:09.175 00:30:09.175 Latency(us) 00:30:09.175 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:09.175 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:30:09.175 nvme0n1 : 2.00 2583.07 322.88 0.00 0.00 6190.40 1529.17 11141.12 00:30:09.175 =================================================================================================================== 00:30:09.175 Total : 2583.07 322.88 0.00 0.00 6190.40 1529.17 11141.12 00:30:09.175 0 00:30:09.175 19:45:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:09.175 19:45:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:09.175 19:45:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:09.175 19:45:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:09.175 | select(.opcode=="crc32c") 00:30:09.175 | "\(.module_name) \(.executed)"' 00:30:09.175 19:45:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:09.175 19:45:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:09.175 19:45:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:09.175 19:45:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:09.175 19:45:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:09.175 19:45:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3785654 00:30:09.175 19:45:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3785654 ']' 00:30:09.175 19:45:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3785654 00:30:09.175 19:45:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:30:09.175 19:45:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:09.175 19:45:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3785654 00:30:09.175 19:45:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:30:09.175 19:45:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:30:09.175 19:45:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3785654' 00:30:09.175 killing process with pid 3785654 00:30:09.175 19:45:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3785654 00:30:09.175 Received shutdown signal, test time was about 2.000000 seconds 00:30:09.175 00:30:09.175 Latency(us) 00:30:09.175 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:09.175 =================================================================================================================== 00:30:09.175 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:09.175 19:45:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3785654 00:30:09.175 19:45:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:30:09.175 19:45:35 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:09.175 19:45:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:09.175 19:45:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:30:09.175 19:45:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:30:09.175 19:45:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:30:09.175 19:45:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:09.175 19:45:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3786281 00:30:09.175 19:45:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3786281 /var/tmp/bperf.sock 00:30:09.175 19:45:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3786281 ']' 00:30:09.175 19:45:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:09.175 19:45:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:09.175 19:45:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:09.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:09.175 19:45:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:09.175 19:45:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:09.175 19:45:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:09.175 [2024-05-15 19:45:35.223271] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
00:30:09.175 [2024-05-15 19:45:35.223351] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3786281 ] 00:30:09.175 EAL: No free 2048 kB hugepages reported on node 1 00:30:09.175 [2024-05-15 19:45:35.288100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:09.175 [2024-05-15 19:45:35.351295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:09.436 19:45:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:09.436 19:45:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:30:09.436 19:45:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:09.436 19:45:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:09.436 19:45:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:09.697 19:45:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:09.697 19:45:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:09.958 nvme0n1 00:30:09.958 19:45:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:09.958 19:45:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:09.958 Running I/O for 2 seconds... 
00:30:12.504 00:30:12.504 Latency(us) 00:30:12.504 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:12.504 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:12.504 nvme0n1 : 2.01 21378.41 83.51 0.00 0.00 5974.07 3932.16 10158.08 00:30:12.504 =================================================================================================================== 00:30:12.504 Total : 21378.41 83.51 0.00 0.00 5974.07 3932.16 10158.08 00:30:12.504 0 00:30:12.504 19:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:12.504 19:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:12.504 19:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:12.504 19:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:12.504 | select(.opcode=="crc32c") 00:30:12.504 | "\(.module_name) \(.executed)"' 00:30:12.504 19:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:12.504 19:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:12.504 19:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:12.504 19:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:12.504 19:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:12.504 19:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3786281 00:30:12.504 19:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3786281 ']' 00:30:12.504 19:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3786281 00:30:12.504 19:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:30:12.504 19:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:12.504 19:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3786281 00:30:12.504 19:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:30:12.504 19:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:30:12.504 19:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3786281' 00:30:12.504 killing process with pid 3786281 00:30:12.504 19:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3786281 00:30:12.504 Received shutdown signal, test time was about 2.000000 seconds 00:30:12.504 00:30:12.504 Latency(us) 00:30:12.505 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:12.505 =================================================================================================================== 00:30:12.505 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:12.505 19:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3786281 00:30:12.505 19:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:30:12.505 19:45:38 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:12.505 19:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:12.505 19:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:30:12.505 19:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:30:12.505 19:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:30:12.505 19:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:12.505 19:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3786857 00:30:12.505 19:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3786857 /var/tmp/bperf.sock 00:30:12.505 19:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3786857 ']' 00:30:12.505 19:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:12.505 19:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:12.505 19:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:12.505 19:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:12.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:12.505 19:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:12.505 19:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:12.505 [2024-05-15 19:45:38.534231] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:30:12.505 [2024-05-15 19:45:38.534288] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3786857 ] 00:30:12.505 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:12.505 Zero copy mechanism will not be used. 
00:30:12.505 EAL: No free 2048 kB hugepages reported on node 1 00:30:12.505 [2024-05-15 19:45:38.598084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:12.505 [2024-05-15 19:45:38.662311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:12.766 19:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:12.766 19:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:30:12.766 19:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:12.766 19:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:12.766 19:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:13.027 19:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:13.027 19:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:13.288 nvme0n1 00:30:13.288 19:45:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:13.288 19:45:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:13.288 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:13.288 Zero copy mechanism will not be used. 00:30:13.288 Running I/O for 2 seconds... 
00:30:15.199 00:30:15.199 Latency(us) 00:30:15.199 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:15.199 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:15.199 nvme0n1 : 2.00 3535.51 441.94 0.00 0.00 4517.30 2252.80 19223.89 00:30:15.199 =================================================================================================================== 00:30:15.199 Total : 3535.51 441.94 0.00 0.00 4517.30 2252.80 19223.89 00:30:15.199 0 00:30:15.199 19:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:15.199 19:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:15.199 19:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:15.200 19:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:15.200 19:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:15.200 | select(.opcode=="crc32c") 00:30:15.200 | "\(.module_name) \(.executed)"' 00:30:15.459 19:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:15.459 19:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:15.459 19:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:15.459 19:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:15.459 19:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3786857 00:30:15.459 19:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3786857 ']' 00:30:15.459 19:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3786857 00:30:15.459 19:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:30:15.459 19:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:15.459 19:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3786857 00:30:15.459 19:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:30:15.459 19:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:30:15.459 19:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3786857' 00:30:15.459 killing process with pid 3786857 00:30:15.459 19:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3786857 00:30:15.459 Received shutdown signal, test time was about 2.000000 seconds 00:30:15.459 00:30:15.459 Latency(us) 00:30:15.459 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:15.459 =================================================================================================================== 00:30:15.459 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:15.459 19:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3786857 00:30:15.719 19:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3784675 00:30:15.719 19:45:41 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3784675 ']' 00:30:15.719 19:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3784675 00:30:15.719 19:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:30:15.719 19:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:15.719 19:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3784675 00:30:15.719 19:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:15.719 19:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:15.719 19:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3784675' 00:30:15.719 killing process with pid 3784675 00:30:15.719 19:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3784675 00:30:15.719 [2024-05-15 19:45:41.814689] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:30:15.719 19:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3784675 00:30:15.979 00:30:15.979 real 0m14.458s 00:30:15.979 user 0m28.611s 00:30:15.979 sys 0m3.233s 00:30:15.979 19:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:15.979 19:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:15.979 ************************************ 00:30:15.979 END TEST nvmf_digest_clean 00:30:15.979 ************************************ 00:30:15.979 19:45:41 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:30:15.979 19:45:41 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:15.979 19:45:41 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:15.979 19:45:41 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:15.979 ************************************ 00:30:15.979 START TEST nvmf_digest_error 00:30:15.979 ************************************ 00:30:15.979 19:45:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:30:15.979 19:45:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:30:15.979 19:45:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:15.979 19:45:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:15.979 19:45:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:15.979 19:45:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=3787563 00:30:15.979 19:45:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 3787563 00:30:15.979 19:45:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3787563 ']' 00:30:15.979 19:45:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:15.979 19:45:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:30:15.979 19:45:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:15.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:15.979 19:45:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:15.979 19:45:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:15.979 19:45:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:15.979 [2024-05-15 19:45:42.082084] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:30:15.979 [2024-05-15 19:45:42.082136] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:15.979 EAL: No free 2048 kB hugepages reported on node 1 00:30:16.239 [2024-05-15 19:45:42.174025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:16.239 [2024-05-15 19:45:42.239664] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:16.239 [2024-05-15 19:45:42.239700] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:16.239 [2024-05-15 19:45:42.239707] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:16.239 [2024-05-15 19:45:42.239713] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:16.239 [2024-05-15 19:45:42.239719] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
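For the error variant the target is brought up idle: nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc, so nothing is initialized until the test has had a chance to reconfigure the accel layer, and waitforlisten polls the RPC socket until the app answers. A rough sketch of that startup, with the poll loop approximated (the real waitforlisten in autotest_common.sh carries more options and retry logic):

  # Approximate reconstruction; the command line and its -i/-e flags are taken from the trace.
  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # --wait-for-rpc keeps the target from initializing subsystems until framework_start_init,
  # which is what lets accel_assign_opc (below) take effect before the transport comes up.
  ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      while (( max_retries-- > 0 )); do
          kill -0 "$pid" 2>/dev/null || return 1      # give up if the app already died
          "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
          sleep 0.5
      done
      return 1
  }

  waitforlisten "$nvmfpid"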
00:30:16.239 [2024-05-15 19:45:42.239740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:16.809 19:45:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:16.809 19:45:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:30:16.809 19:45:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:16.809 19:45:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:16.809 19:45:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:16.809 19:45:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:16.809 19:45:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:30:16.809 19:45:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.809 19:45:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:16.809 [2024-05-15 19:45:42.981827] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:30:16.809 19:45:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.809 19:45:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:30:16.809 19:45:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:30:16.809 19:45:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.809 19:45:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:17.069 null0 00:30:17.069 [2024-05-15 19:45:43.058539] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:17.069 [2024-05-15 19:45:43.082547] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:30:17.069 [2024-05-15 19:45:43.082774] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:17.069 19:45:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:17.069 19:45:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:30:17.069 19:45:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:17.069 19:45:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:30:17.069 19:45:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:30:17.069 19:45:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:30:17.069 19:45:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3787744 00:30:17.069 19:45:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3787744 /var/tmp/bperf.sock 00:30:17.069 19:45:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3787744 ']' 00:30:17.069 19:45:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:17.069 19:45:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:30:17.069 19:45:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:17.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:17.069 19:45:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:17.069 19:45:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:17.069 19:45:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:30:17.069 [2024-05-15 19:45:43.135816] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:30:17.069 [2024-05-15 19:45:43.135863] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3787744 ] 00:30:17.069 EAL: No free 2048 kB hugepages reported on node 1 00:30:17.069 [2024-05-15 19:45:43.199656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:17.330 [2024-05-15 19:45:43.263688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:17.330 19:45:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:17.330 19:45:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:30:17.330 19:45:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:17.330 19:45:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:17.590 19:45:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:17.590 19:45:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.590 19:45:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:17.590 19:45:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:17.590 19:45:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:17.590 19:45:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:17.850 nvme0n1 00:30:17.850 19:45:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:30:17.850 19:45:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.850 19:45:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:17.851 19:45:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:17.851 19:45:43 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:17.851 19:45:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:17.851 Running I/O for 2 seconds... 00:30:17.851 [2024-05-15 19:45:43.939829] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:17.851 [2024-05-15 19:45:43.939866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.851 [2024-05-15 19:45:43.939878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.851 [2024-05-15 19:45:43.956027] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:17.851 [2024-05-15 19:45:43.956052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.851 [2024-05-15 19:45:43.956062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.851 [2024-05-15 19:45:43.970764] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:17.851 [2024-05-15 19:45:43.970788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:18892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.851 [2024-05-15 19:45:43.970797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.851 [2024-05-15 19:45:43.981706] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:17.851 [2024-05-15 19:45:43.981728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.851 [2024-05-15 19:45:43.981737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.851 [2024-05-15 19:45:43.994991] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:17.851 [2024-05-15 19:45:43.995014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.851 [2024-05-15 19:45:43.995023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.851 [2024-05-15 19:45:44.007746] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:17.851 [2024-05-15 19:45:44.007768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.851 [2024-05-15 19:45:44.007783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.851 [2024-05-15 19:45:44.018058] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x94f1c0) 00:30:17.851 [2024-05-15 19:45:44.018081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.851 [2024-05-15 19:45:44.018089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.851 [2024-05-15 19:45:44.031693] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:17.851 [2024-05-15 19:45:44.031714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.851 [2024-05-15 19:45:44.031723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.112 [2024-05-15 19:45:44.045111] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.112 [2024-05-15 19:45:44.045133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.112 [2024-05-15 19:45:44.045142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.112 [2024-05-15 19:45:44.056667] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.112 [2024-05-15 19:45:44.056688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:12520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.112 [2024-05-15 19:45:44.056697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.112 [2024-05-15 19:45:44.068923] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.112 [2024-05-15 19:45:44.068945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:14580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.112 [2024-05-15 19:45:44.068954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.112 [2024-05-15 19:45:44.080645] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.112 [2024-05-15 19:45:44.080667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.112 [2024-05-15 19:45:44.080676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.112 [2024-05-15 19:45:44.093997] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.112 [2024-05-15 19:45:44.094018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.112 [2024-05-15 19:45:44.094027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.112 [2024-05-15 19:45:44.106151] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.112 [2024-05-15 19:45:44.106174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.112 [2024-05-15 19:45:44.106184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.112 [2024-05-15 19:45:44.117725] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.112 [2024-05-15 19:45:44.117754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.112 [2024-05-15 19:45:44.117762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.112 [2024-05-15 19:45:44.129685] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.112 [2024-05-15 19:45:44.129706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:6648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.112 [2024-05-15 19:45:44.129715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.112 [2024-05-15 19:45:44.141129] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.112 [2024-05-15 19:45:44.141151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.112 [2024-05-15 19:45:44.141160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.112 [2024-05-15 19:45:44.154737] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.112 [2024-05-15 19:45:44.154758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.112 [2024-05-15 19:45:44.154767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.112 [2024-05-15 19:45:44.168817] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.112 [2024-05-15 19:45:44.168838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.112 [2024-05-15 19:45:44.168847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.112 [2024-05-15 19:45:44.181115] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.112 [2024-05-15 19:45:44.181137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.112 [2024-05-15 19:45:44.181145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
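The stream of data digest errors here (and continuing below) is the expected outcome: the target's crc32c operation was routed to the accel error module at startup, and just before perform_tests the test switched that module from disable to corrupt, so the digests the target returns no longer match what the host computes and every affected read completes with a transient transport error. Condensed from the trace above, the host/target sequence is roughly as follows (rpc_cmd addresses the target's /var/tmp/spdk.sock, bperf_rpc/bperf_py the bdevperf instance, as before):

  # Condensed sketch of run_bperf_err as traced above; flag semantics beyond what the trace
  # shows (e.g. the -i 256 argument to the error injection RPC) are not asserted here.
  rpc_cmd accel_assign_opc -o crc32c -m error        # target, before framework_start_init

  "$rootdir/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -t 2 -q 128 -z &           # host-side I/O generator, started idle
  waitforlisten $! /var/tmp/bperf.sock

  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  rpc_cmd accel_error_inject_error -o crc32c -t disable         # start from a clean state
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0                    # data digest on
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256  # now corrupt crc32c results
  bperf_py perform_tests                                        # -> the digest errors logged here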
00:30:18.112 [2024-05-15 19:45:44.192664] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.112 [2024-05-15 19:45:44.192685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:20457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.112 [2024-05-15 19:45:44.192693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.112 [2024-05-15 19:45:44.204948] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.112 [2024-05-15 19:45:44.204968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:17173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.112 [2024-05-15 19:45:44.204977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.112 [2024-05-15 19:45:44.217845] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.112 [2024-05-15 19:45:44.217866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.113 [2024-05-15 19:45:44.217875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.113 [2024-05-15 19:45:44.230028] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.113 [2024-05-15 19:45:44.230049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.113 [2024-05-15 19:45:44.230058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.113 [2024-05-15 19:45:44.242871] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.113 [2024-05-15 19:45:44.242892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.113 [2024-05-15 19:45:44.242901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.113 [2024-05-15 19:45:44.255552] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.113 [2024-05-15 19:45:44.255573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.113 [2024-05-15 19:45:44.255582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.113 [2024-05-15 19:45:44.266179] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.113 [2024-05-15 19:45:44.266200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.113 [2024-05-15 19:45:44.266209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.113 [2024-05-15 19:45:44.279655] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.113 [2024-05-15 19:45:44.279676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.113 [2024-05-15 19:45:44.279685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.113 [2024-05-15 19:45:44.291915] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.113 [2024-05-15 19:45:44.291936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.113 [2024-05-15 19:45:44.291945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.373 [2024-05-15 19:45:44.304971] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.373 [2024-05-15 19:45:44.304993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.374 [2024-05-15 19:45:44.305002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.374 [2024-05-15 19:45:44.315145] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.374 [2024-05-15 19:45:44.315166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.374 [2024-05-15 19:45:44.315175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.374 [2024-05-15 19:45:44.328516] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.374 [2024-05-15 19:45:44.328544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.374 [2024-05-15 19:45:44.328557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.374 [2024-05-15 19:45:44.343215] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.374 [2024-05-15 19:45:44.343237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.374 [2024-05-15 19:45:44.343246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.374 [2024-05-15 19:45:44.355359] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.374 [2024-05-15 19:45:44.355380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.374 [2024-05-15 19:45:44.355389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.374 [2024-05-15 19:45:44.367096] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.374 [2024-05-15 19:45:44.367118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.374 [2024-05-15 19:45:44.367126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.374 [2024-05-15 19:45:44.382095] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.374 [2024-05-15 19:45:44.382117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.374 [2024-05-15 19:45:44.382126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.374 [2024-05-15 19:45:44.394093] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.374 [2024-05-15 19:45:44.394114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.374 [2024-05-15 19:45:44.394123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.374 [2024-05-15 19:45:44.408756] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.374 [2024-05-15 19:45:44.408778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.374 [2024-05-15 19:45:44.408787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.374 [2024-05-15 19:45:44.421455] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.374 [2024-05-15 19:45:44.421476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.374 [2024-05-15 19:45:44.421485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.374 [2024-05-15 19:45:44.433791] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.374 [2024-05-15 19:45:44.433813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.374 [2024-05-15 19:45:44.433822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.374 [2024-05-15 19:45:44.446126] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.374 [2024-05-15 19:45:44.446151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.374 [2024-05-15 19:45:44.446160] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.374 [2024-05-15 19:45:44.457737] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.374 [2024-05-15 19:45:44.457758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.374 [2024-05-15 19:45:44.457767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.374 [2024-05-15 19:45:44.470871] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.374 [2024-05-15 19:45:44.470892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.374 [2024-05-15 19:45:44.470901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.374 [2024-05-15 19:45:44.482900] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.374 [2024-05-15 19:45:44.482922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.374 [2024-05-15 19:45:44.482932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.374 [2024-05-15 19:45:44.495739] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.374 [2024-05-15 19:45:44.495760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.374 [2024-05-15 19:45:44.495769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.374 [2024-05-15 19:45:44.507780] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.374 [2024-05-15 19:45:44.507801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.374 [2024-05-15 19:45:44.507810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.374 [2024-05-15 19:45:44.521450] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.374 [2024-05-15 19:45:44.521472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.374 [2024-05-15 19:45:44.521480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.374 [2024-05-15 19:45:44.532269] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.374 [2024-05-15 19:45:44.532290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:18.374 [2024-05-15 19:45:44.532299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.374 [2024-05-15 19:45:44.546075] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.374 [2024-05-15 19:45:44.546097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.374 [2024-05-15 19:45:44.546110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.636 [2024-05-15 19:45:44.558178] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.636 [2024-05-15 19:45:44.558200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:16045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.636 [2024-05-15 19:45:44.558211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.636 [2024-05-15 19:45:44.570608] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.636 [2024-05-15 19:45:44.570629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.636 [2024-05-15 19:45:44.570638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.636 [2024-05-15 19:45:44.582666] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.636 [2024-05-15 19:45:44.582687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:25569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.636 [2024-05-15 19:45:44.582696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.636 [2024-05-15 19:45:44.594292] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.636 [2024-05-15 19:45:44.594319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.636 [2024-05-15 19:45:44.594328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.636 [2024-05-15 19:45:44.606339] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.636 [2024-05-15 19:45:44.606360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.636 [2024-05-15 19:45:44.606369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.636 [2024-05-15 19:45:44.618646] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.636 [2024-05-15 19:45:44.618667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23567 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.636 [2024-05-15 19:45:44.618675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.636 [2024-05-15 19:45:44.631119] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.636 [2024-05-15 19:45:44.631140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.636 [2024-05-15 19:45:44.631149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.636 [2024-05-15 19:45:44.643329] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.636 [2024-05-15 19:45:44.643351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.636 [2024-05-15 19:45:44.643359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.636 [2024-05-15 19:45:44.657035] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.636 [2024-05-15 19:45:44.657060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.636 [2024-05-15 19:45:44.657069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.636 [2024-05-15 19:45:44.668532] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.636 [2024-05-15 19:45:44.668553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.636 [2024-05-15 19:45:44.668562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.636 [2024-05-15 19:45:44.681661] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.636 [2024-05-15 19:45:44.681682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.636 [2024-05-15 19:45:44.681691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.636 [2024-05-15 19:45:44.694430] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.636 [2024-05-15 19:45:44.694452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:23847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.636 [2024-05-15 19:45:44.694460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.636 [2024-05-15 19:45:44.706897] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.636 [2024-05-15 19:45:44.706919] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.636 [2024-05-15 19:45:44.706928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.636 [2024-05-15 19:45:44.719373] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.636 [2024-05-15 19:45:44.719396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.636 [2024-05-15 19:45:44.719404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.637 [2024-05-15 19:45:44.730131] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.637 [2024-05-15 19:45:44.730152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.637 [2024-05-15 19:45:44.730161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.637 [2024-05-15 19:45:44.744062] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.637 [2024-05-15 19:45:44.744083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:17162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.637 [2024-05-15 19:45:44.744092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.637 [2024-05-15 19:45:44.756698] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.637 [2024-05-15 19:45:44.756719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.637 [2024-05-15 19:45:44.756727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.637 [2024-05-15 19:45:44.768944] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.637 [2024-05-15 19:45:44.768965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.637 [2024-05-15 19:45:44.768974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.637 [2024-05-15 19:45:44.781042] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.637 [2024-05-15 19:45:44.781062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:10735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.637 [2024-05-15 19:45:44.781072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.637 [2024-05-15 19:45:44.794685] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.637 [2024-05-15 19:45:44.794706] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.637 [2024-05-15 19:45:44.794715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.637 [2024-05-15 19:45:44.805173] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.637 [2024-05-15 19:45:44.805194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.637 [2024-05-15 19:45:44.805203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.637 [2024-05-15 19:45:44.818643] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.637 [2024-05-15 19:45:44.818663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.637 [2024-05-15 19:45:44.818672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.898 [2024-05-15 19:45:44.832021] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.898 [2024-05-15 19:45:44.832043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.899 [2024-05-15 19:45:44.832051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.899 [2024-05-15 19:45:44.844421] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.899 [2024-05-15 19:45:44.844442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.899 [2024-05-15 19:45:44.844451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.899 [2024-05-15 19:45:44.855525] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.899 [2024-05-15 19:45:44.855546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:20346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.899 [2024-05-15 19:45:44.855555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.899 [2024-05-15 19:45:44.870148] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x94f1c0) 00:30:18.899 [2024-05-15 19:45:44.870169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:16955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.899 [2024-05-15 19:45:44.870182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.899 [2024-05-15 19:45:44.881421] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x94f1c0)
00:30:18.899 [2024-05-15 19:45:44.881442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:13360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:18.899 [2024-05-15 19:45:44.881451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line sequence (a data digest error on tqpair=(0x94f1c0), the affected READ with len:1, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 completion) repeats for every outstanding READ on this qpair from 19:45:44.894 through 19:45:45.925; only the timestamp, cid, and lba fields change ...]
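Each completion in the run above carries NVMe status (00/22): status code type 0x0 (generic command status) with status code 0x22, which SPDK prints as COMMAND TRANSIENT TRANSPORT ERROR, and dnr:0 means the do-not-retry bit is clear, so the initiator is allowed to retry. As a rough cross-check against the error counter the test reads back below, these completions can also be tallied straight from a saved copy of this console output; a minimal sketch, assuming the output was saved to a hypothetical console.log:

    # Tally the transient-transport-error completions printed by bdevperf in this run.
    # console.log is a placeholder; point it at wherever this Jenkins output was saved.
    grep -o 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' console.log | wc -l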
00:30:19.949
00:30:19.949                                                                                            Latency(us)
00:30:19.949 Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:30:19.949 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:30:19.949 nvme0n1                                :       2.01   20412.46      79.74       0.00       0.00    6263.36    3181.23   16820.91
00:30:19.949 ===================================================================================================================
00:30:19.949 Total                                  :              20412.46      79.74       0.00       0.00    6263.36    3181.23   16820.91
00:30:19.949 0
00:30:19.949 19:45:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:30:19.949 19:45:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:30:19.949 19:45:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:19.949 | .driver_specific
00:30:19.949 | .nvme_error
00:30:19.949 | .status_code
00:30:19.949 | .command_transient_transport_error'
00:30:19.949 19:45:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:30:20.210 19:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 160 > 0 ))
00:30:20.210 19:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3787744
00:30:20.210 19:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3787744 ']'
00:30:20.210 19:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3787744
00:30:20.210 19:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:30:20.210 19:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:30:20.210 19:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3787744
00:30:20.210 19:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:30:20.210 19:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:30:20.210 19:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3787744'
killing process with pid 3787744
00:30:20.210 19:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3787744
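The get_transient_errcount helper traced above is what turns those completions into a pass/fail signal: it fetches nvme0n1's I/O statistics from the bdevperf instance and extracts the transient-transport-error counter from the returned JSON (160 here, hence the (( 160 > 0 )) check passing). A stand-alone sketch of the same query, built only from the socket path and jq filter visible in the trace:

    # Read nvme0n1's accumulated NVMe error counters from the running bdevperf instance;
    # the counters exist because bdev_nvme_set_options was called with --nvme-error-stat.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'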
00:30:20.210 Received shutdown signal, test time was about 2.000000 seconds
00:30:20.210
00:30:20.210                                                                                            Latency(us)
00:30:20.210 Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:30:20.210 ===================================================================================================================
00:30:20.210 Total                                  :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:30:20.210 19:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3787744
00:30:20.210 19:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:30:20.210 19:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:30:20.210 19:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:30:20.210 19:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:30:20.210 19:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:30:20.210 19:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3788420
00:30:20.210 19:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3788420 /var/tmp/bperf.sock
00:30:20.210 19:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3788420 ']'
00:30:20.210 19:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:30:20.210 19:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:30:20.210 19:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:30:20.210 19:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:30:20.210 19:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:20.210 19:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:30:20.211 [2024-05-15 19:45:46.395205] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization...
00:30:20.471 [2024-05-15 19:45:46.395261] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3788420 ]
00:30:20.471 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:20.471 Zero copy mechanism will not be used.
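For the 131072-byte run, run_bperf_err again drives everything through a bdevperf instance in RPC mode: the binary is started with -z so it comes up idle on /var/tmp/bperf.sock with no bdev configuration of its own, and, as the trace further down shows, only begins I/O once perform_tests is sent. A minimal sketch of that launch, using only the flags shown above; backgrounding it and capturing the pid mirrors the bperfpid assignment in the trace but is otherwise an assumption, since the wrapper itself is not shown here:

    # Start bdevperf idle (-z): 128 KiB random reads, queue depth 16, 2-second run,
    # controlled over the bperf RPC socket; the NVMe bdev is attached later via RPC.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!   # assumed backgrounding, matching the bperfpid seen in the trace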
00:30:20.471 EAL: No free 2048 kB hugepages reported on node 1
00:30:20.471 [2024-05-15 19:45:46.459230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:20.471 [2024-05-15 19:45:46.522769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:30:20.471 19:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:30:20.471 19:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:30:20.471 19:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:20.471 19:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:20.731 19:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:30:20.731 19:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:20.731 19:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:20.731 19:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:20.731 19:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:20.731 19:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:21.304 nvme0n1
00:30:21.304 19:45:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:30:21.304 19:45:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:21.304 19:45:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:21.304 19:45:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:21.304 19:45:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:30:21.304 19:45:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:30:21.304 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:21.304 Zero copy mechanism will not be used.
00:30:21.304 Running I/O for 2 seconds...
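Before perform_tests starts the I/O, the trace above arms the digest-failure path entirely over RPC: per-bdev NVMe error counters are enabled with an unbounded bdev-layer retry count, the target is attached with --ddgst so TCP data digests are generated and verified, and the accel crc32c operation is switched to corrupt mode (the -i 32 argument presumably controls how often a corrupted digest is produced; the log itself does not spell that out). A condensed sketch of the same sequence, reusing the literal arguments from the trace and omitting the initial '-t disable' reset:

    # All RPCs go to the idle bdevperf instance listening on /var/tmp/bperf.sock.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Keep per-bdev NVMe error statistics; -1 leaves the bdev retry count unlimited.
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Attach the NVMe-oF TCP target with data digest enabled; this creates nvme0n1.
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Make the accel crc32c path return corrupted digests so received data fails verification.
    $rpc -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t corrupt -i 32
    # Kick off the 2-second workload that bdevperf -z left parked.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests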
00:30:21.304 [2024-05-15 19:45:47.375125] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0)
00:30:21.304 [2024-05-15 19:45:47.375164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.304 [2024-05-15 19:45:47.375175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line sequence (a data digest error on tqpair=(0x11110a0), the affected 128 KiB READ with cid:15 len:32, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for each completed READ of this run from 19:45:47.388 through 19:45:47.899; only the timestamp, lba, and sqhd fields change ...]
00:30:21.829 [2024-05-15 19:45:47.913243] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on
tqpair=(0x11110a0) 00:30:21.829 [2024-05-15 19:45:47.913265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.829 [2024-05-15 19:45:47.913274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.829 [2024-05-15 19:45:47.926673] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:21.829 [2024-05-15 19:45:47.926694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.829 [2024-05-15 19:45:47.926703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.829 [2024-05-15 19:45:47.940545] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:21.829 [2024-05-15 19:45:47.940567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.829 [2024-05-15 19:45:47.940576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.829 [2024-05-15 19:45:47.953082] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:21.829 [2024-05-15 19:45:47.953104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.829 [2024-05-15 19:45:47.953116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.829 [2024-05-15 19:45:47.967319] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:21.829 [2024-05-15 19:45:47.967340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.829 [2024-05-15 19:45:47.967350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.829 [2024-05-15 19:45:47.977895] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:21.829 [2024-05-15 19:45:47.977917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.829 [2024-05-15 19:45:47.977926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.829 [2024-05-15 19:45:47.990077] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:21.829 [2024-05-15 19:45:47.990100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.829 [2024-05-15 19:45:47.990109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.829 [2024-05-15 19:45:48.002789] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:21.829 [2024-05-15 19:45:48.002812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.829 [2024-05-15 19:45:48.002821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.090 [2024-05-15 19:45:48.014716] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.090 [2024-05-15 19:45:48.014739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.090 [2024-05-15 19:45:48.014749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.090 [2024-05-15 19:45:48.027399] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.090 [2024-05-15 19:45:48.027422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.090 [2024-05-15 19:45:48.027431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.090 [2024-05-15 19:45:48.040219] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.090 [2024-05-15 19:45:48.040241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.090 [2024-05-15 19:45:48.040250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.090 [2024-05-15 19:45:48.053190] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.090 [2024-05-15 19:45:48.053212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.090 [2024-05-15 19:45:48.053221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.090 [2024-05-15 19:45:48.065992] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.090 [2024-05-15 19:45:48.066018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.090 [2024-05-15 19:45:48.066027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.090 [2024-05-15 19:45:48.078955] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.090 [2024-05-15 19:45:48.078977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.091 [2024-05-15 19:45:48.078986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:30:22.091 [2024-05-15 19:45:48.093538] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.091 [2024-05-15 19:45:48.093560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.091 [2024-05-15 19:45:48.093569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.091 [2024-05-15 19:45:48.107020] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.091 [2024-05-15 19:45:48.107043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.091 [2024-05-15 19:45:48.107051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.091 [2024-05-15 19:45:48.119419] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.091 [2024-05-15 19:45:48.119441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.091 [2024-05-15 19:45:48.119450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.091 [2024-05-15 19:45:48.130552] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.091 [2024-05-15 19:45:48.130574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.091 [2024-05-15 19:45:48.130583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.091 [2024-05-15 19:45:48.143395] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.091 [2024-05-15 19:45:48.143417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.091 [2024-05-15 19:45:48.143425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.091 [2024-05-15 19:45:48.155824] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.091 [2024-05-15 19:45:48.155846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.091 [2024-05-15 19:45:48.155854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.091 [2024-05-15 19:45:48.168195] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.091 [2024-05-15 19:45:48.168217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.091 [2024-05-15 19:45:48.168226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.091 [2024-05-15 19:45:48.180425] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.091 [2024-05-15 19:45:48.180447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.091 [2024-05-15 19:45:48.180456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.091 [2024-05-15 19:45:48.194700] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.091 [2024-05-15 19:45:48.194722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.091 [2024-05-15 19:45:48.194731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.091 [2024-05-15 19:45:48.208746] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.091 [2024-05-15 19:45:48.208769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.091 [2024-05-15 19:45:48.208778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.091 [2024-05-15 19:45:48.222144] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.091 [2024-05-15 19:45:48.222165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.091 [2024-05-15 19:45:48.222174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.091 [2024-05-15 19:45:48.235055] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.091 [2024-05-15 19:45:48.235077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.091 [2024-05-15 19:45:48.235086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.091 [2024-05-15 19:45:48.248137] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.091 [2024-05-15 19:45:48.248159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.091 [2024-05-15 19:45:48.248168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.091 [2024-05-15 19:45:48.260946] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.091 [2024-05-15 19:45:48.260968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.091 [2024-05-15 19:45:48.260977] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.091 [2024-05-15 19:45:48.272779] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.091 [2024-05-15 19:45:48.272801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.091 [2024-05-15 19:45:48.272810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.352 [2024-05-15 19:45:48.285060] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.352 [2024-05-15 19:45:48.285083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.352 [2024-05-15 19:45:48.285096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.352 [2024-05-15 19:45:48.297944] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.352 [2024-05-15 19:45:48.297966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.352 [2024-05-15 19:45:48.297974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.352 [2024-05-15 19:45:48.310799] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.352 [2024-05-15 19:45:48.310822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.353 [2024-05-15 19:45:48.310830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.353 [2024-05-15 19:45:48.323739] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.353 [2024-05-15 19:45:48.323760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.353 [2024-05-15 19:45:48.323769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.353 [2024-05-15 19:45:48.337754] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.353 [2024-05-15 19:45:48.337776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.353 [2024-05-15 19:45:48.337785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.353 [2024-05-15 19:45:48.351715] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.353 [2024-05-15 19:45:48.351738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.353 [2024-05-15 19:45:48.351746] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.353 [2024-05-15 19:45:48.365693] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.353 [2024-05-15 19:45:48.365715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.353 [2024-05-15 19:45:48.365723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.353 [2024-05-15 19:45:48.380060] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.353 [2024-05-15 19:45:48.380083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.353 [2024-05-15 19:45:48.380091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.353 [2024-05-15 19:45:48.394339] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.353 [2024-05-15 19:45:48.394361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.353 [2024-05-15 19:45:48.394370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.353 [2024-05-15 19:45:48.408648] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.353 [2024-05-15 19:45:48.408674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.353 [2024-05-15 19:45:48.408683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.353 [2024-05-15 19:45:48.423194] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.353 [2024-05-15 19:45:48.423216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.353 [2024-05-15 19:45:48.423225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.353 [2024-05-15 19:45:48.437339] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.353 [2024-05-15 19:45:48.437361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.353 [2024-05-15 19:45:48.437369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.353 [2024-05-15 19:45:48.451939] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.353 [2024-05-15 19:45:48.451962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:22.353 [2024-05-15 19:45:48.451971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.353 [2024-05-15 19:45:48.467142] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.353 [2024-05-15 19:45:48.467164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.353 [2024-05-15 19:45:48.467173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.353 [2024-05-15 19:45:48.480935] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.353 [2024-05-15 19:45:48.480956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.353 [2024-05-15 19:45:48.480965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.353 [2024-05-15 19:45:48.495600] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.353 [2024-05-15 19:45:48.495623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.353 [2024-05-15 19:45:48.495631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.353 [2024-05-15 19:45:48.508764] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.353 [2024-05-15 19:45:48.508786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.353 [2024-05-15 19:45:48.508795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.353 [2024-05-15 19:45:48.521385] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.353 [2024-05-15 19:45:48.521407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.353 [2024-05-15 19:45:48.521416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.353 [2024-05-15 19:45:48.533875] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.353 [2024-05-15 19:45:48.533898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.353 [2024-05-15 19:45:48.533906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.614 [2024-05-15 19:45:48.546267] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.614 [2024-05-15 19:45:48.546290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6784 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.614 [2024-05-15 19:45:48.546299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.614 [2024-05-15 19:45:48.558590] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.614 [2024-05-15 19:45:48.558612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.614 [2024-05-15 19:45:48.558621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.614 [2024-05-15 19:45:48.570970] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.614 [2024-05-15 19:45:48.570992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.614 [2024-05-15 19:45:48.571001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.614 [2024-05-15 19:45:48.584500] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.614 [2024-05-15 19:45:48.584522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.614 [2024-05-15 19:45:48.584530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.614 [2024-05-15 19:45:48.598130] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.614 [2024-05-15 19:45:48.598152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.614 [2024-05-15 19:45:48.598161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.614 [2024-05-15 19:45:48.610090] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.614 [2024-05-15 19:45:48.610112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.614 [2024-05-15 19:45:48.610121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.614 [2024-05-15 19:45:48.622151] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.614 [2024-05-15 19:45:48.622173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.614 [2024-05-15 19:45:48.622182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.614 [2024-05-15 19:45:48.634817] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.614 [2024-05-15 19:45:48.634839] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.614 [2024-05-15 19:45:48.634853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.614 [2024-05-15 19:45:48.647111] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.614 [2024-05-15 19:45:48.647134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.614 [2024-05-15 19:45:48.647142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.615 [2024-05-15 19:45:48.658896] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.615 [2024-05-15 19:45:48.658918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.615 [2024-05-15 19:45:48.658927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.615 [2024-05-15 19:45:48.671454] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.615 [2024-05-15 19:45:48.671476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.615 [2024-05-15 19:45:48.671485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.615 [2024-05-15 19:45:48.684165] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.615 [2024-05-15 19:45:48.684187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.615 [2024-05-15 19:45:48.684195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.615 [2024-05-15 19:45:48.696290] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.615 [2024-05-15 19:45:48.696317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.615 [2024-05-15 19:45:48.696326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.615 [2024-05-15 19:45:48.710188] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.615 [2024-05-15 19:45:48.710210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.615 [2024-05-15 19:45:48.710218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.615 [2024-05-15 19:45:48.723231] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.615 [2024-05-15 19:45:48.723253] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.615 [2024-05-15 19:45:48.723262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.615 [2024-05-15 19:45:48.734555] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.615 [2024-05-15 19:45:48.734577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.615 [2024-05-15 19:45:48.734586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.615 [2024-05-15 19:45:48.747533] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.615 [2024-05-15 19:45:48.747555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.615 [2024-05-15 19:45:48.747564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.615 [2024-05-15 19:45:48.759717] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.615 [2024-05-15 19:45:48.759740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.615 [2024-05-15 19:45:48.759749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.615 [2024-05-15 19:45:48.772249] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.615 [2024-05-15 19:45:48.772271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.615 [2024-05-15 19:45:48.772280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.615 [2024-05-15 19:45:48.784511] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.615 [2024-05-15 19:45:48.784533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.615 [2024-05-15 19:45:48.784542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.615 [2024-05-15 19:45:48.798118] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.615 [2024-05-15 19:45:48.798141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.615 [2024-05-15 19:45:48.798150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.876 [2024-05-15 19:45:48.811052] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 
00:30:22.876 [2024-05-15 19:45:48.811076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.876 [2024-05-15 19:45:48.811085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.876 [2024-05-15 19:45:48.823451] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.876 [2024-05-15 19:45:48.823473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.876 [2024-05-15 19:45:48.823482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.876 [2024-05-15 19:45:48.836074] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.876 [2024-05-15 19:45:48.836097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.876 [2024-05-15 19:45:48.836106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.876 [2024-05-15 19:45:48.848705] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.876 [2024-05-15 19:45:48.848728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.876 [2024-05-15 19:45:48.848741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.876 [2024-05-15 19:45:48.862325] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.876 [2024-05-15 19:45:48.862348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.876 [2024-05-15 19:45:48.862356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.876 [2024-05-15 19:45:48.874614] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.876 [2024-05-15 19:45:48.874636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.876 [2024-05-15 19:45:48.874645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.876 [2024-05-15 19:45:48.887242] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.876 [2024-05-15 19:45:48.887264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.876 [2024-05-15 19:45:48.887273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.876 [2024-05-15 19:45:48.900064] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.876 [2024-05-15 19:45:48.900086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.876 [2024-05-15 19:45:48.900095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.876 [2024-05-15 19:45:48.912244] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.876 [2024-05-15 19:45:48.912267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.876 [2024-05-15 19:45:48.912276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.876 [2024-05-15 19:45:48.925796] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.876 [2024-05-15 19:45:48.925819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.876 [2024-05-15 19:45:48.925828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.876 [2024-05-15 19:45:48.938751] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.876 [2024-05-15 19:45:48.938774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.876 [2024-05-15 19:45:48.938782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.876 [2024-05-15 19:45:48.952424] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.877 [2024-05-15 19:45:48.952446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.877 [2024-05-15 19:45:48.952455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.877 [2024-05-15 19:45:48.966454] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.877 [2024-05-15 19:45:48.966481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.877 [2024-05-15 19:45:48.966491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.877 [2024-05-15 19:45:48.980275] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.877 [2024-05-15 19:45:48.980297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.877 [2024-05-15 19:45:48.980306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.877 [2024-05-15 19:45:48.993361] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.877 [2024-05-15 19:45:48.993384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.877 [2024-05-15 19:45:48.993393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.877 [2024-05-15 19:45:49.006939] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.877 [2024-05-15 19:45:49.006962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.877 [2024-05-15 19:45:49.006970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.877 [2024-05-15 19:45:49.020027] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.877 [2024-05-15 19:45:49.020050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.877 [2024-05-15 19:45:49.020059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.877 [2024-05-15 19:45:49.033628] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.877 [2024-05-15 19:45:49.033651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.877 [2024-05-15 19:45:49.033659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.877 [2024-05-15 19:45:49.046796] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.877 [2024-05-15 19:45:49.046818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.877 [2024-05-15 19:45:49.046827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.877 [2024-05-15 19:45:49.059238] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:22.877 [2024-05-15 19:45:49.059260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.877 [2024-05-15 19:45:49.059269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:23.138 [2024-05-15 19:45:49.073168] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:23.138 [2024-05-15 19:45:49.073192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.138 [2024-05-15 19:45:49.073201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:30:23.138 [2024-05-15 19:45:49.085975] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:23.138 [2024-05-15 19:45:49.085998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.138 [2024-05-15 19:45:49.086006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.138 [2024-05-15 19:45:49.098716] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:23.138 [2024-05-15 19:45:49.098738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.138 [2024-05-15 19:45:49.098747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:23.138 [2024-05-15 19:45:49.112505] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:23.138 [2024-05-15 19:45:49.112528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.138 [2024-05-15 19:45:49.112537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:23.138 [2024-05-15 19:45:49.126586] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:23.138 [2024-05-15 19:45:49.126609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.138 [2024-05-15 19:45:49.126618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:23.138 [2024-05-15 19:45:49.140582] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:23.138 [2024-05-15 19:45:49.140605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.138 [2024-05-15 19:45:49.140613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.138 [2024-05-15 19:45:49.155294] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:23.138 [2024-05-15 19:45:49.155324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.138 [2024-05-15 19:45:49.155333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:23.138 [2024-05-15 19:45:49.168428] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:23.138 [2024-05-15 19:45:49.168450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.138 [2024-05-15 19:45:49.168458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:23.138 [2024-05-15 19:45:49.182348] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:23.138 [2024-05-15 19:45:49.182371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.138 [2024-05-15 19:45:49.182380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:23.138 [2024-05-15 19:45:49.196246] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:23.138 [2024-05-15 19:45:49.196269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.138 [2024-05-15 19:45:49.196283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.138 [2024-05-15 19:45:49.210786] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:23.138 [2024-05-15 19:45:49.210809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.138 [2024-05-15 19:45:49.210818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:23.138 [2024-05-15 19:45:49.224604] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:23.138 [2024-05-15 19:45:49.224626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.138 [2024-05-15 19:45:49.224635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:23.138 [2024-05-15 19:45:49.237584] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:23.138 [2024-05-15 19:45:49.237607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.138 [2024-05-15 19:45:49.237616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:23.138 [2024-05-15 19:45:49.251179] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:23.138 [2024-05-15 19:45:49.251202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.138 [2024-05-15 19:45:49.251211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.138 [2024-05-15 19:45:49.264209] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:23.138 [2024-05-15 19:45:49.264232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.138 [2024-05-15 19:45:49.264241] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:23.138 [2024-05-15 19:45:49.277752] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:23.138 [2024-05-15 19:45:49.277774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.139 [2024-05-15 19:45:49.277782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:23.139 [2024-05-15 19:45:49.291092] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:23.139 [2024-05-15 19:45:49.291115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.139 [2024-05-15 19:45:49.291123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:23.139 [2024-05-15 19:45:49.305562] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:23.139 [2024-05-15 19:45:49.305585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.139 [2024-05-15 19:45:49.305594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.139 [2024-05-15 19:45:49.319768] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:23.139 [2024-05-15 19:45:49.319795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.139 [2024-05-15 19:45:49.319804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:23.399 [2024-05-15 19:45:49.333864] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:23.400 [2024-05-15 19:45:49.333888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.400 [2024-05-15 19:45:49.333896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:23.400 [2024-05-15 19:45:49.346898] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:23.400 [2024-05-15 19:45:49.346921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.400 [2024-05-15 19:45:49.346929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:23.400 [2024-05-15 19:45:49.359958] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11110a0) 00:30:23.400 [2024-05-15 19:45:49.359980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.400 [2024-05-15 19:45:49.359989] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:23.400
00:30:23.400                                                                                  Latency(us)
00:30:23.400 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s      TO/s    Average        min        max
00:30:23.400 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:30:23.400 nvme0n1                     :       2.00    2354.18     294.27       0.00      0.00    6788.75    1761.28   16384.00
00:30:23.400 ===================================================================================================================
00:30:23.400 Total                       :               2354.18     294.27       0.00      0.00    6788.75    1761.28   16384.00
00:30:23.400 0
00:30:23.400 19:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:30:23.400 19:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:30:23.400 19:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:30:23.400 19:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:23.400 | .driver_specific
00:30:23.400 | .nvme_error
00:30:23.400 | .status_code
00:30:23.400 | .command_transient_transport_error'
00:30:23.661 19:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 152 > 0 ))
00:30:23.661 19:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3788420
00:30:23.661 19:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3788420 ']'
00:30:23.661 19:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3788420
00:30:23.661 19:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:30:23.661 19:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:30:23.661 19:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3788420
00:30:23.661 19:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:30:23.661 19:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:30:23.661 19:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3788420'
00:30:23.661 killing process with pid 3788420
00:30:23.661 19:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3788420
00:30:23.661 Received shutdown signal, test time was about 2.000000 seconds
00:30:23.661
00:30:23.661                                                                                  Latency(us)
00:30:23.661 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s      TO/s    Average        min        max
00:30:23.661 ===================================================================================================================
00:30:23.661 Total                       :                  0.00       0.00       0.00      0.00       0.00       0.00       0.00
00:30:23.661 19:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3788420
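The pass/fail decision a few lines above ("(( 152 > 0 ))") is nothing more than a counter read back from bdevperf: with --nvme-error-stat set, each retried digest failure is tallied per NVMe status code and exposed through bdev_get_iostat. A minimal Python sketch of the same query, assuming the rpc.py path and the /var/tmp/bperf.sock socket used in this run (the helper below is illustrative, not part of host/digest.sh):

#!/usr/bin/env python3
"""Hypothetical stand-in for get_transient_errcount from host/digest.sh."""
import json
import subprocess

SPDK_DIR = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk"  # assumed checkout path from this run
RPC_SOCK = "/var/tmp/bperf.sock"                                # bdevperf application RPC socket


def get_transient_errcount(bdev: str) -> int:
    # Equivalent of: rpc.py -s $RPC_SOCK bdev_get_iostat -b $bdev | jq -r
    #   '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
    out = subprocess.run(
        [f"{SPDK_DIR}/scripts/rpc.py", "-s", RPC_SOCK, "bdev_get_iostat", "-b", bdev],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)["bdevs"][0]
    return stats["driver_specific"]["nvme_error"]["status_code"]["command_transient_transport_error"]


if __name__ == "__main__":
    # digest.sh only requires the count to be non-zero; it was 152 in the run above.
    assert get_transient_errcount("nvme0n1") > 0

The jq pipeline in the trace reads exactly this field; here it returned 152 for the two-second randread run, so the previous bdevperf instance is torn down and the randwrite variant is started next.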
00:30:23.661 19:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:30:23.661 19:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:30:23.661 19:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:30:23.661 19:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:30:23.661 19:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:30:23.661 19:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3789094
00:30:23.661 19:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3789094 /var/tmp/bperf.sock
00:30:23.661 19:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3789094 ']'
00:30:23.661 19:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:30:23.661 19:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:30:23.661 19:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:30:23.661 19:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:30:23.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:30:23.661 19:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:30:23.661 19:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:23.661 [2024-05-15 19:45:49.827970] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization...
00:30:23.661 [2024-05-15 19:45:49.828024] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3789094 ]
00:30:23.921 EAL: No free 2048 kB hugepages reported on node 1
00:30:23.921 [2024-05-15 19:45:49.893443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:23.921 [2024-05-15 19:45:49.956533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:30:23.921 19:45:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:30:23.921 19:45:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:30:23.921 19:45:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:23.921 19:45:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:24.181 19:45:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:30:24.181 19:45:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:24.181 19:45:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:24.181 19:45:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:24.181 19:45:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:24.181 19:45:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:24.441 nvme0n1
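At this point the controller is attached with data digest enabled (--ddgst), and the next two steps in the trace arm the accel CRC32C corruption and start the workload, which is what produces the stream of WRITE completions with TRANSIENT TRANSPORT ERROR below. Condensed into a Python sketch with the bdevperf flags and RPC parameters copied from the trace; the helper and the target socket name are assumptions (rpc_cmd in the harness talks to the nvmf target's default RPC socket, which this log does not show), so treat it as an illustration rather than the digest.sh implementation:

#!/usr/bin/env python3
"""Hypothetical condensation of the run_bperf_err randwrite setup traced above."""
import os
import subprocess
import time

SPDK_DIR = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk"  # assumed checkout path from this run
BPERF_SOCK = "/var/tmp/bperf.sock"   # bdevperf application socket (from the trace)
TARGET_SOCK = "/var/tmp/spdk.sock"   # assumed default socket of the nvmf target app


def rpc(sock: str, *args: str) -> str:
    """Invoke scripts/rpc.py against the given SPDK application socket."""
    return subprocess.run([f"{SPDK_DIR}/scripts/rpc.py", "-s", sock, *args],
                          check=True, capture_output=True, text=True).stdout


# Start bdevperf: core mask 0x2, 4 KiB random writes, queue depth 128, 2 s runtime,
# -z = wait for RPC configuration before running I/O (flags copied from the trace).
bperf = subprocess.Popen([f"{SPDK_DIR}/build/examples/bdevperf", "-m", "2", "-r", BPERF_SOCK,
                          "-w", "randwrite", "-o", "4096", "-t", "2", "-q", "128", "-z"])
while not os.path.exists(BPERF_SOCK):   # digest.sh uses waitforlisten for this
    time.sleep(0.1)

# Count NVMe errors per status code and retry failed I/O indefinitely in the bdev layer,
# so injected digest failures surface as statistics instead of I/O errors.
rpc(BPERF_SOCK, "bdev_nvme_set_options", "--nvme-error-stat", "--bdev-retry-count", "-1")

# Clear any stale injection, then attach the NVMe-oF TCP controller with data digest
# enabled (--ddgst) so every data PDU carries a CRC32C to verify.
rpc(TARGET_SOCK, "accel_error_inject_error", "-o", "crc32c", "-t", "disable")
rpc(BPERF_SOCK, "bdev_nvme_attach_controller", "--ddgst", "-t", "tcp", "-a", "10.0.0.2",
    "-s", "4420", "-f", "ipv4", "-n", "nqn.2016-06.io.spdk:cnode1", "-b", "nvme0")

# Corrupt the next 256 accel CRC32C operations and kick off the timed run
# (bdevperf.py ... perform_tests), which yields the digest errors logged below.
rpc(TARGET_SOCK, "accel_error_inject_error", "-o", "crc32c", "-t", "corrupt", "-i", "256")
subprocess.run([f"{SPDK_DIR}/examples/bdev/bdevperf/bdevperf.py", "-s", BPERF_SOCK,
                "perform_tests"], check=True)

Because --bdev-retry-count is -1, each corrupted digest is retried rather than failed up the stack, so the run can complete while still accumulating a non-zero transient-transport-error count for the final check.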
00:30:24.441 19:45:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:30:24.441 19:45:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:24.441 19:45:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:24.441 19:45:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:24.441 19:45:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:30:24.441 19:45:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:30:24.702 Running I/O for 2 seconds...
00:30:24.702 [2024-05-15 19:45:50.645604] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208
00:30:24.702 [2024-05-15 19:45:50.645929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:24.702 [2024-05-15 19:45:50.645961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:24.702 [2024-05-15 19:45:50.658192] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208
00:30:24.702 [2024-05-15 19:45:50.658487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:24.702 [2024-05-15 19:45:50.658510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:24.703 [2024-05-15 19:45:50.670733] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208
00:30:24.703 [2024-05-15 19:45:50.671041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:24.703 [2024-05-15 19:45:50.671061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:24.703 [2024-05-15 19:45:50.683204] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208
00:30:24.703 [2024-05-15 19:45:50.683465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:24.703 [2024-05-15 19:45:50.683485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:24.703 [2024-05-15 19:45:50.695670] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208
00:30:24.703 [2024-05-15 19:45:50.696009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:24.703 [2024-05-15 19:45:50.696029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22)
qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.703 [2024-05-15 19:45:50.708098] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:24.703 [2024-05-15 19:45:50.708305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.703 [2024-05-15 19:45:50.708330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.703 [2024-05-15 19:45:50.720491] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:24.703 [2024-05-15 19:45:50.720814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.703 [2024-05-15 19:45:50.720834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.703 [2024-05-15 19:45:50.732906] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:24.703 [2024-05-15 19:45:50.733232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.703 [2024-05-15 19:45:50.733252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.703 [2024-05-15 19:45:50.745298] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:24.703 [2024-05-15 19:45:50.745657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.703 [2024-05-15 19:45:50.745677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.703 [2024-05-15 19:45:50.757717] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:24.703 [2024-05-15 19:45:50.758032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.703 [2024-05-15 19:45:50.758052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.703 [2024-05-15 19:45:50.770090] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:24.703 [2024-05-15 19:45:50.770433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.703 [2024-05-15 19:45:50.770453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.703 [2024-05-15 19:45:50.782472] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:24.703 [2024-05-15 19:45:50.782811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.703 [2024-05-15 19:45:50.782830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.703 [2024-05-15 19:45:50.794854] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:24.703 [2024-05-15 19:45:50.795173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.703 [2024-05-15 19:45:50.795192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.703 [2024-05-15 19:45:50.807242] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:24.703 [2024-05-15 19:45:50.807620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.703 [2024-05-15 19:45:50.807640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.703 [2024-05-15 19:45:50.819610] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:24.703 [2024-05-15 19:45:50.819946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.703 [2024-05-15 19:45:50.819966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.703 [2024-05-15 19:45:50.831986] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:24.703 [2024-05-15 19:45:50.832337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.703 [2024-05-15 19:45:50.832357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.703 [2024-05-15 19:45:50.844365] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:24.703 [2024-05-15 19:45:50.844575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.703 [2024-05-15 19:45:50.844594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.703 [2024-05-15 19:45:50.856731] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:24.703 [2024-05-15 19:45:50.857047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.703 [2024-05-15 19:45:50.857066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.703 [2024-05-15 19:45:50.869081] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:24.703 [2024-05-15 19:45:50.869396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.703 [2024-05-15 19:45:50.869416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.703 [2024-05-15 19:45:50.881473] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:24.703 [2024-05-15 19:45:50.881790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.703 [2024-05-15 19:45:50.881810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.965 [2024-05-15 19:45:50.893853] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:24.965 [2024-05-15 19:45:50.894168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.965 [2024-05-15 19:45:50.894188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.965 [2024-05-15 19:45:50.906192] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:24.965 [2024-05-15 19:45:50.906440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.965 [2024-05-15 19:45:50.906459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.965 [2024-05-15 19:45:50.918563] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:24.965 [2024-05-15 19:45:50.918841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.965 [2024-05-15 19:45:50.918861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.965 [2024-05-15 19:45:50.930922] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:24.965 [2024-05-15 19:45:50.931237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.965 [2024-05-15 19:45:50.931261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.965 [2024-05-15 19:45:50.943283] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:24.965 [2024-05-15 19:45:50.943634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.965 [2024-05-15 19:45:50.943654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.965 [2024-05-15 19:45:50.955641] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:24.965 [2024-05-15 19:45:50.955960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.965 [2024-05-15 19:45:50.955980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.965 [2024-05-15 19:45:50.968031] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:24.965 [2024-05-15 19:45:50.968358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.965 [2024-05-15 19:45:50.968377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.965 [2024-05-15 19:45:50.980399] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:24.965 [2024-05-15 19:45:50.980677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.965 [2024-05-15 19:45:50.980697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.965 [2024-05-15 19:45:50.992761] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:24.965 [2024-05-15 19:45:50.993042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.965 [2024-05-15 19:45:50.993062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.965 [2024-05-15 19:45:51.005101] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:24.965 [2024-05-15 19:45:51.005422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.965 [2024-05-15 19:45:51.005443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.965 [2024-05-15 19:45:51.017484] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:24.965 [2024-05-15 19:45:51.017810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.965 [2024-05-15 19:45:51.017829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.965 [2024-05-15 19:45:51.029850] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:24.965 [2024-05-15 19:45:51.030166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.965 [2024-05-15 19:45:51.030185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.965 [2024-05-15 19:45:51.042197] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:24.965 [2024-05-15 19:45:51.042440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.965 [2024-05-15 19:45:51.042459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.965 [2024-05-15 19:45:51.054574] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:24.965 [2024-05-15 19:45:51.054861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.965 [2024-05-15 19:45:51.054880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.965 [2024-05-15 19:45:51.066925] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:24.965 [2024-05-15 19:45:51.067243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.965 [2024-05-15 19:45:51.067263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.965 [2024-05-15 19:45:51.079295] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:24.965 [2024-05-15 19:45:51.079619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.965 [2024-05-15 19:45:51.079639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.965 [2024-05-15 19:45:51.091778] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:24.965 [2024-05-15 19:45:51.092096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.965 [2024-05-15 19:45:51.092115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.965 [2024-05-15 19:45:51.104142] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:24.965 [2024-05-15 19:45:51.104442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.965 [2024-05-15 19:45:51.104462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.965 [2024-05-15 19:45:51.116495] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:24.965 [2024-05-15 19:45:51.116820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.965 [2024-05-15 19:45:51.116840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.965 [2024-05-15 19:45:51.128865] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:24.965 [2024-05-15 19:45:51.129184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.965 [2024-05-15 19:45:51.129203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.965 [2024-05-15 19:45:51.141241] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:24.965 [2024-05-15 19:45:51.141590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.965 [2024-05-15 19:45:51.141610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.226 [2024-05-15 19:45:51.153626] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.226 [2024-05-15 19:45:51.153954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.226 [2024-05-15 19:45:51.153974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.226 [2024-05-15 19:45:51.165979] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.226 [2024-05-15 19:45:51.166294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.226 [2024-05-15 19:45:51.166318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.226 [2024-05-15 19:45:51.178366] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.226 [2024-05-15 19:45:51.178661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.226 [2024-05-15 19:45:51.178680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.227 [2024-05-15 19:45:51.190730] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.227 [2024-05-15 19:45:51.191047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.227 [2024-05-15 19:45:51.191066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.227 [2024-05-15 19:45:51.203117] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.227 [2024-05-15 19:45:51.203434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.227 [2024-05-15 19:45:51.203454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.227 [2024-05-15 19:45:51.215480] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.227 [2024-05-15 19:45:51.215804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.227 [2024-05-15 19:45:51.215824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.227 [2024-05-15 19:45:51.227854] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.227 [2024-05-15 19:45:51.228133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.227 [2024-05-15 19:45:51.228153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.227 [2024-05-15 19:45:51.240212] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.227 [2024-05-15 19:45:51.240472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.227 [2024-05-15 19:45:51.240491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.227 [2024-05-15 19:45:51.252590] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.227 [2024-05-15 19:45:51.252905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.227 [2024-05-15 19:45:51.252924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.227 [2024-05-15 19:45:51.264938] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.227 [2024-05-15 19:45:51.265227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.227 [2024-05-15 19:45:51.265246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.227 [2024-05-15 19:45:51.277337] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.227 [2024-05-15 19:45:51.277672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.227 [2024-05-15 19:45:51.277691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.227 [2024-05-15 19:45:51.289699] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.227 [2024-05-15 19:45:51.290012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.227 [2024-05-15 19:45:51.290031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.227 [2024-05-15 19:45:51.302086] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.227 [2024-05-15 19:45:51.302375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.227 [2024-05-15 19:45:51.302394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.227 [2024-05-15 19:45:51.314445] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.227 [2024-05-15 19:45:51.314741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.227 [2024-05-15 19:45:51.314760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.227 [2024-05-15 19:45:51.326814] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.227 [2024-05-15 19:45:51.327149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.227 [2024-05-15 19:45:51.327168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.227 [2024-05-15 19:45:51.339171] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.227 [2024-05-15 19:45:51.339451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.227 [2024-05-15 19:45:51.339471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.227 [2024-05-15 19:45:51.351561] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.227 [2024-05-15 19:45:51.351896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.227 [2024-05-15 19:45:51.351915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.227 [2024-05-15 19:45:51.363919] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.227 [2024-05-15 19:45:51.364235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.227 [2024-05-15 19:45:51.364258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.227 [2024-05-15 19:45:51.376281] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.227 [2024-05-15 19:45:51.376632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.227 [2024-05-15 19:45:51.376652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.227 [2024-05-15 19:45:51.388640] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.227 [2024-05-15 19:45:51.388962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.227 [2024-05-15 19:45:51.388982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.227 [2024-05-15 19:45:51.401022] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.227 [2024-05-15 19:45:51.401338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.227 [2024-05-15 19:45:51.401358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.488 [2024-05-15 19:45:51.413426] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.488 [2024-05-15 19:45:51.413741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.488 [2024-05-15 19:45:51.413760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.488 [2024-05-15 19:45:51.425799] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.488 [2024-05-15 19:45:51.426125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.488 [2024-05-15 19:45:51.426145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.488 [2024-05-15 19:45:51.438144] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.488 [2024-05-15 19:45:51.438436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.488 [2024-05-15 19:45:51.438456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.488 [2024-05-15 19:45:51.450518] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.488 [2024-05-15 19:45:51.450835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.488 [2024-05-15 19:45:51.450854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.488 [2024-05-15 19:45:51.462824] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.488 [2024-05-15 19:45:51.463138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.488 [2024-05-15 19:45:51.463158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.488 [2024-05-15 19:45:51.475293] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.488 [2024-05-15 19:45:51.475622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.488 [2024-05-15 19:45:51.475642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.488 [2024-05-15 19:45:51.487639] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.488 [2024-05-15 19:45:51.487961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.488 [2024-05-15 19:45:51.487980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.488 [2024-05-15 19:45:51.500028] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.488 [2024-05-15 19:45:51.500348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.488 [2024-05-15 19:45:51.500368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.488 [2024-05-15 19:45:51.512359] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.488 [2024-05-15 19:45:51.512676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.488 [2024-05-15 19:45:51.512695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.488 [2024-05-15 19:45:51.524719] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.488 [2024-05-15 19:45:51.525035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.488 [2024-05-15 19:45:51.525055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.488 [2024-05-15 19:45:51.537067] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.488 [2024-05-15 19:45:51.537383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.488 [2024-05-15 19:45:51.537403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.488 [2024-05-15 19:45:51.549428] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.488 [2024-05-15 19:45:51.549744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.488 [2024-05-15 19:45:51.549763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.488 [2024-05-15 19:45:51.561743] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.488 [2024-05-15 19:45:51.562072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.488 [2024-05-15 19:45:51.562092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.488 [2024-05-15 19:45:51.574132] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.488 [2024-05-15 19:45:51.574449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.488 [2024-05-15 19:45:51.574468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.488 [2024-05-15 19:45:51.586467] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.488 [2024-05-15 19:45:51.586809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.489 [2024-05-15 19:45:51.586828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.489 [2024-05-15 19:45:51.598819] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.489 [2024-05-15 19:45:51.599137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.489 [2024-05-15 19:45:51.599157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.489 [2024-05-15 19:45:51.611401] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.489 [2024-05-15 19:45:51.611721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.489 [2024-05-15 19:45:51.611740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.489 [2024-05-15 19:45:51.623768] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.489 [2024-05-15 19:45:51.624047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.489 [2024-05-15 19:45:51.624067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.489 [2024-05-15 19:45:51.636113] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.489 [2024-05-15 19:45:51.636430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.489 [2024-05-15 19:45:51.636450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.489 [2024-05-15 19:45:51.648478] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.489 [2024-05-15 19:45:51.648761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.489 [2024-05-15 19:45:51.648779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.489 [2024-05-15 19:45:51.660824] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.489 [2024-05-15 19:45:51.661142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.489 [2024-05-15 19:45:51.661162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.750 [2024-05-15 19:45:51.673206] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.750 [2024-05-15 19:45:51.673602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.750 [2024-05-15 19:45:51.673621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.750 [2024-05-15 19:45:51.685552] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.750 [2024-05-15 19:45:51.685848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.750 [2024-05-15 19:45:51.685868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.750 [2024-05-15 19:45:51.697921] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.750 [2024-05-15 19:45:51.698238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.750 [2024-05-15 19:45:51.698257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.750 [2024-05-15 19:45:51.710263] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.750 [2024-05-15 19:45:51.710604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.750 [2024-05-15 19:45:51.710623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.750 [2024-05-15 19:45:51.722648] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.750 [2024-05-15 19:45:51.722965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.750 [2024-05-15 19:45:51.722984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.750 [2024-05-15 19:45:51.734986] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.750 [2024-05-15 19:45:51.735190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.750 [2024-05-15 19:45:51.735210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.750 [2024-05-15 19:45:51.747347] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.750 [2024-05-15 19:45:51.747670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.750 [2024-05-15 19:45:51.747689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.750 [2024-05-15 19:45:51.759665] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.750 [2024-05-15 19:45:51.759982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.750 [2024-05-15 19:45:51.760001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.750 [2024-05-15 19:45:51.772023] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.750 [2024-05-15 19:45:51.772344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.750 [2024-05-15 19:45:51.772363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.750 [2024-05-15 19:45:51.784361] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.750 [2024-05-15 19:45:51.784682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.750 [2024-05-15 19:45:51.784701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.750 [2024-05-15 19:45:51.796715] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.750 [2024-05-15 19:45:51.796996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.750 [2024-05-15 19:45:51.797019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.750 [2024-05-15 19:45:51.809054] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.750 [2024-05-15 19:45:51.809259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.750 [2024-05-15 19:45:51.809277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.750 [2024-05-15 19:45:51.821416] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.750 [2024-05-15 19:45:51.821738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.750 [2024-05-15 19:45:51.821757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.750 [2024-05-15 19:45:51.833780] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.750 [2024-05-15 19:45:51.833986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.750 [2024-05-15 19:45:51.834004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.750 [2024-05-15 19:45:51.846138] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.750 [2024-05-15 19:45:51.846417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.750 [2024-05-15 19:45:51.846436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.750 [2024-05-15 19:45:51.858478] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.750 [2024-05-15 19:45:51.858792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.750 [2024-05-15 19:45:51.858811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.750 [2024-05-15 19:45:51.870849] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.750 [2024-05-15 19:45:51.871164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.750 [2024-05-15 19:45:51.871184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.750 [2024-05-15 19:45:51.883179] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.750 [2024-05-15 19:45:51.883460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.750 [2024-05-15 19:45:51.883480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.750 [2024-05-15 19:45:51.895530] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.750 [2024-05-15 19:45:51.895843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.750 [2024-05-15 19:45:51.895863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.750 [2024-05-15 19:45:51.907900] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.750 [2024-05-15 19:45:51.908221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.750 [2024-05-15 19:45:51.908240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.750 [2024-05-15 19:45:51.920256] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.750 [2024-05-15 19:45:51.920580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.750 [2024-05-15 19:45:51.920600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.750 [2024-05-15 19:45:51.932616] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:25.750 [2024-05-15 19:45:51.932902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.750 [2024-05-15 19:45:51.932921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.011 [2024-05-15 19:45:51.944977] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.011 [2024-05-15 19:45:51.945292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.011 [2024-05-15 19:45:51.945311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.011 [2024-05-15 19:45:51.957328] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.011 [2024-05-15 19:45:51.957644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.011 [2024-05-15 19:45:51.957663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.011 [2024-05-15 19:45:51.969685] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.012 [2024-05-15 19:45:51.969998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.012 [2024-05-15 19:45:51.970018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.012 [2024-05-15 19:45:51.982018] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.012 [2024-05-15 19:45:51.982333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.012 [2024-05-15 19:45:51.982353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.012 [2024-05-15 19:45:51.994408] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.012 [2024-05-15 19:45:51.994741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.012 [2024-05-15 19:45:51.994760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.012 [2024-05-15 19:45:52.006759] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.012 [2024-05-15 19:45:52.007136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.012 [2024-05-15 19:45:52.007155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.012 [2024-05-15 19:45:52.019113] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.012 [2024-05-15 19:45:52.019428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.012 [2024-05-15 19:45:52.019448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.012 [2024-05-15 19:45:52.031443] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.012 [2024-05-15 19:45:52.031760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.012 [2024-05-15 19:45:52.031779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.012 [2024-05-15 19:45:52.043798] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.012 [2024-05-15 19:45:52.044115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.012 [2024-05-15 19:45:52.044135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.012 [2024-05-15 19:45:52.056125] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.012 [2024-05-15 19:45:52.056441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.012 [2024-05-15 19:45:52.056460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.012 [2024-05-15 19:45:52.068496] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.012 [2024-05-15 19:45:52.068812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.012 [2024-05-15 19:45:52.068831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.012 [2024-05-15 19:45:52.080831] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.012 [2024-05-15 19:45:52.081038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.012 [2024-05-15 19:45:52.081057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.012 [2024-05-15 19:45:52.093244] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.012 [2024-05-15 19:45:52.093481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.012 [2024-05-15 19:45:52.093500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.012 [2024-05-15 19:45:52.105626] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.012 [2024-05-15 19:45:52.105944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.012 [2024-05-15 19:45:52.105963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.012 [2024-05-15 19:45:52.117948] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.012 [2024-05-15 19:45:52.118234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.012 [2024-05-15 19:45:52.118253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.012 [2024-05-15 19:45:52.130310] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.012 [2024-05-15 19:45:52.130638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.012 [2024-05-15 19:45:52.130657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.012 [2024-05-15 19:45:52.142682] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.012 [2024-05-15 19:45:52.143000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.012 [2024-05-15 19:45:52.143019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.012 [2024-05-15 19:45:52.155021] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.012 [2024-05-15 19:45:52.155338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.012 [2024-05-15 19:45:52.155357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.012 [2024-05-15 19:45:52.167380] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.012 [2024-05-15 19:45:52.167694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.012 [2024-05-15 19:45:52.167714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.012 [2024-05-15 19:45:52.179705] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.012 [2024-05-15 19:45:52.180028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.012 [2024-05-15 19:45:52.180047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.012 [2024-05-15 19:45:52.192079] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.012 [2024-05-15 19:45:52.192370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.012 [2024-05-15 19:45:52.192389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.273 [2024-05-15 19:45:52.204434] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.273 [2024-05-15 19:45:52.204752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.273 [2024-05-15 19:45:52.204771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.273 [2024-05-15 19:45:52.216813] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.273 [2024-05-15 19:45:52.217018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.273 [2024-05-15 19:45:52.217036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.273 [2024-05-15 19:45:52.229157] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.273 [2024-05-15 19:45:52.229436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.273 [2024-05-15 19:45:52.229459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.273 [2024-05-15 19:45:52.241521] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.273 [2024-05-15 19:45:52.241837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.273 [2024-05-15 19:45:52.241857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.273 [2024-05-15 19:45:52.253843] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.273 [2024-05-15 19:45:52.254158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.273 [2024-05-15 19:45:52.254178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.273 [2024-05-15 19:45:52.266196] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.273 [2024-05-15 19:45:52.266455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.273 [2024-05-15 19:45:52.266476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.273 [2024-05-15 19:45:52.278546] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.273 [2024-05-15 19:45:52.278830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.273 [2024-05-15 19:45:52.278849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.273 [2024-05-15 19:45:52.290916] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.273 [2024-05-15 19:45:52.291231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.273 [2024-05-15 19:45:52.291250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.273 [2024-05-15 19:45:52.303232] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.273 [2024-05-15 19:45:52.303589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.273 [2024-05-15 19:45:52.303609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.274 [2024-05-15 19:45:52.315586] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.274 [2024-05-15 19:45:52.315905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.274 [2024-05-15 19:45:52.315924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.274 [2024-05-15 19:45:52.327932] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.274 [2024-05-15 19:45:52.328215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.274 [2024-05-15 19:45:52.328235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.274 [2024-05-15 19:45:52.340294] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.274 [2024-05-15 19:45:52.340601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.274 [2024-05-15 19:45:52.340620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.274 [2024-05-15 19:45:52.352660] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.274 [2024-05-15 19:45:52.352978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.274 [2024-05-15 19:45:52.352997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.274 [2024-05-15 19:45:52.365016] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.274 [2024-05-15 19:45:52.365223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.274 [2024-05-15 19:45:52.365241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.274 [2024-05-15 19:45:52.377360] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.274 [2024-05-15 19:45:52.377670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.274 [2024-05-15 19:45:52.377689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.274 [2024-05-15 19:45:52.389726] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.274 [2024-05-15 19:45:52.389948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.274 [2024-05-15 19:45:52.389966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.274 [2024-05-15 19:45:52.402104] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.274 [2024-05-15 19:45:52.402447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.274 [2024-05-15 19:45:52.402467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.274 [2024-05-15 19:45:52.414484] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.274 [2024-05-15 19:45:52.414801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.274 [2024-05-15 19:45:52.414820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.274 [2024-05-15 19:45:52.426833] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.274 [2024-05-15 19:45:52.427147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.274 [2024-05-15 19:45:52.427167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.274 [2024-05-15 19:45:52.439213] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.274 [2024-05-15 19:45:52.439472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.274 [2024-05-15 19:45:52.439492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.274 [2024-05-15 19:45:52.451602] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.274 [2024-05-15 19:45:52.451914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.274 [2024-05-15 19:45:52.451933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.534 [2024-05-15 19:45:52.463953] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.534 [2024-05-15 19:45:52.464163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.534 [2024-05-15 19:45:52.464182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.534 [2024-05-15 19:45:52.476346] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.534 [2024-05-15 19:45:52.476703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.534 [2024-05-15 19:45:52.476723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.534 [2024-05-15 19:45:52.488727] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.534 [2024-05-15 19:45:52.489054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.534 [2024-05-15 19:45:52.489073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.534 [2024-05-15 19:45:52.501057] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.534 [2024-05-15 19:45:52.501361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.534 [2024-05-15 19:45:52.501380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.534 [2024-05-15 19:45:52.513470] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.534 [2024-05-15 19:45:52.513775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.534 [2024-05-15 19:45:52.513794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.534 [2024-05-15 19:45:52.525833] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.534 [2024-05-15 19:45:52.526040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.534 [2024-05-15 19:45:52.526058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.534 [2024-05-15 19:45:52.538189] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.534 [2024-05-15 19:45:52.538470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.534 [2024-05-15 19:45:52.538490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.534 [2024-05-15 19:45:52.550540] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.534 [2024-05-15 19:45:52.550857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.534 [2024-05-15 19:45:52.550876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.534 [2024-05-15 19:45:52.562915] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.534 [2024-05-15 19:45:52.563239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.534 [2024-05-15 19:45:52.563258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.534 [2024-05-15 19:45:52.575253] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.534 [2024-05-15 19:45:52.575480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.534 [2024-05-15 19:45:52.575499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.534 [2024-05-15 19:45:52.587610] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.534 [2024-05-15 19:45:52.587891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.534 [2024-05-15 19:45:52.587911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.534 [2024-05-15 19:45:52.599938] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.534 [2024-05-15 19:45:52.600260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.534 [2024-05-15 19:45:52.600280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.534 [2024-05-15 19:45:52.612555] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.534 [2024-05-15 19:45:52.612872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.534 [2024-05-15 19:45:52.612891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.534 [2024-05-15 19:45:52.624925] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69fe50) with pdu=0x2000190fd208 00:30:26.534 [2024-05-15 19:45:52.625223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.534 [2024-05-15 19:45:52.625242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.534 00:30:26.534 Latency(us) 00:30:26.534 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:26.534 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:26.534 nvme0n1 : 2.01 20594.30 80.45 0.00 0.00 6201.50 5925.55 13052.59 00:30:26.534 =================================================================================================================== 00:30:26.534 Total : 20594.30 80.45 0.00 0.00 6201.50 5925.55 13052.59 00:30:26.534 0 00:30:26.534 19:45:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:26.534 19:45:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:26.534 19:45:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:26.534 | .driver_specific 00:30:26.534 | .nvme_error 00:30:26.534 | .status_code 00:30:26.534 | .command_transient_transport_error' 00:30:26.534 19:45:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:26.795 19:45:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 161 > 0 )) 00:30:26.795 19:45:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3789094 00:30:26.795 19:45:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3789094 ']' 00:30:26.795 19:45:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3789094 00:30:26.795 19:45:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:30:26.795 19:45:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:26.795 19:45:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3789094 00:30:26.795 19:45:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:30:26.795 19:45:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:30:26.795 19:45:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3789094' 00:30:26.795 killing process with pid 3789094 00:30:26.795 19:45:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@965 -- # kill 3789094 00:30:26.795 Received shutdown signal, test time was about 2.000000 seconds 00:30:26.795 00:30:26.795 Latency(us) 00:30:26.795 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:26.795 =================================================================================================================== 00:30:26.795 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:26.795 19:45:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3789094 00:30:27.055 19:45:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:30:27.055 19:45:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:27.055 19:45:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:30:27.055 19:45:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:30:27.055 19:45:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:30:27.055 19:45:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3789720 00:30:27.055 19:45:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3789720 /var/tmp/bperf.sock 00:30:27.055 19:45:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3789720 ']' 00:30:27.055 19:45:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:27.055 19:45:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:30:27.055 19:45:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:27.055 19:45:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:27.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:27.055 19:45:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:27.055 19:45:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:27.055 [2024-05-15 19:45:53.105420] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:30:27.055 [2024-05-15 19:45:53.105475] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3789720 ] 00:30:27.055 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:27.055 Zero copy mechanism will not be used. 
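A note on the pass/fail check traced in the run above: host/digest.sh's get_transient_errcount reads the bdev I/O statistics from the bdevperf RPC socket and extracts the NVMe transient-transport-error counter, which the injected data-digest failures are expected to increment; this run reported 161. A minimal sketch of that check, reusing the paths, socket name, bdev name and jq filter exactly as they appear in the trace:

  # Paths and RPC socket as used in this workspace.
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  bperf_sock=/var/tmp/bperf.sock

  # With bdev_nvme_set_options --nvme-error-stat, per-status-code NVMe error
  # counters are exposed under driver_specific.nvme_error in bdev_get_iostat output.
  errcount=$("$rpc_py" -s "$bperf_sock" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

  # The check passes only if at least one COMMAND TRANSIENT TRANSPORT ERROR (00/22)
  # completion was counted, matching the (( 161 > 0 )) test in the trace.
  if (( errcount > 0 )); then
    echo "data digest errors were reported as transient transport errors: $errcount"
  fi

Each data digest failure in the log above completes the WRITE with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is exactly the counter this filter reads.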
00:30:27.055 EAL: No free 2048 kB hugepages reported on node 1 00:30:27.055 [2024-05-15 19:45:53.169428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:27.055 [2024-05-15 19:45:53.233139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:27.315 19:45:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:27.315 19:45:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:30:27.315 19:45:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:27.315 19:45:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:27.576 19:45:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:27.576 19:45:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.576 19:45:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:27.576 19:45:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.576 19:45:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:27.576 19:45:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:27.848 nvme0n1 00:30:27.848 19:45:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:30:27.848 19:45:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.848 19:45:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:27.848 19:45:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.848 19:45:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:27.848 19:45:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:28.109 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:28.109 Zero copy mechanism will not be used. 00:30:28.109 Running I/O for 2 seconds... 
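Condensing the setup traced above for this second run (randwrite, 131072-byte I/O, queue depth 16) into one place: a rough sketch, with the commands and arguments copied from the trace and the assumption that the rpc_cmd / rpc.py calls issued without -s go to the nvmf target application's default RPC socket, while -s /var/tmp/bperf.sock addresses the bdevperf instance:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc_py=$spdk/scripts/rpc.py

  # Start bdevperf with -z so it waits for RPC configuration before running I/O
  # (the script then waits on /var/tmp/bperf.sock via waitforlisten).
  $spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
    -w randwrite -o 131072 -t 2 -q 16 -z &

  # Keep NVMe error statistics and retry failed I/O indefinitely (-1) on the initiator.
  $rpc_py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Clear any previous crc32c error injection, as the trace does before attaching.
  $rpc_py accel_error_inject_error -o crc32c -t disable

  # Attach the controller over TCP with data digest enabled (--ddgst).
  $rpc_py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Inject corruption into 32 crc32c calculations so data digests stop matching
  # (issued through the rpc_cmd helper in the trace, not the bperf socket).
  $rpc_py accel_error_inject_error -o crc32c -t corrupt -i 32

  # Drive the workload through bdevperf's RPC interface.
  $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The corrupted digests are what produce the data_crc32_calc_done errors and COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions in the output that follows, while --bdev-retry-count -1 keeps bdevperf retrying those WRITEs instead of failing the job.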
00:30:28.109 [2024-05-15 19:45:54.091054] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.109 [2024-05-15 19:45:54.091550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.109 [2024-05-15 19:45:54.091581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.109 [2024-05-15 19:45:54.105778] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.109 [2024-05-15 19:45:54.106195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.109 [2024-05-15 19:45:54.106221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.109 [2024-05-15 19:45:54.117566] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.109 [2024-05-15 19:45:54.117715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.109 [2024-05-15 19:45:54.117735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.109 [2024-05-15 19:45:54.128745] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.109 [2024-05-15 19:45:54.129158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.109 [2024-05-15 19:45:54.129180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.109 [2024-05-15 19:45:54.140256] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.109 [2024-05-15 19:45:54.140655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.109 [2024-05-15 19:45:54.140677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.109 [2024-05-15 19:45:54.151617] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.109 [2024-05-15 19:45:54.151731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.109 [2024-05-15 19:45:54.151751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.109 [2024-05-15 19:45:54.162376] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.109 [2024-05-15 19:45:54.162474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.109 [2024-05-15 19:45:54.162493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.109 [2024-05-15 19:45:54.173819] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.109 [2024-05-15 19:45:54.174207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.109 [2024-05-15 19:45:54.174228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.109 [2024-05-15 19:45:54.184044] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.109 [2024-05-15 19:45:54.184180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.109 [2024-05-15 19:45:54.184199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.109 [2024-05-15 19:45:54.194675] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.110 [2024-05-15 19:45:54.195061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.110 [2024-05-15 19:45:54.195082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.110 [2024-05-15 19:45:54.204451] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.110 [2024-05-15 19:45:54.204713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.110 [2024-05-15 19:45:54.204734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.110 [2024-05-15 19:45:54.214955] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.110 [2024-05-15 19:45:54.215347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.110 [2024-05-15 19:45:54.215368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.110 [2024-05-15 19:45:54.224039] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.110 [2024-05-15 19:45:54.224400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.110 [2024-05-15 19:45:54.224422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.110 [2024-05-15 19:45:54.232704] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.110 [2024-05-15 19:45:54.233107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.110 [2024-05-15 19:45:54.233128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.110 [2024-05-15 19:45:54.241321] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.110 [2024-05-15 19:45:54.241682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.110 [2024-05-15 19:45:54.241703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.110 [2024-05-15 19:45:54.251235] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.110 [2024-05-15 19:45:54.251603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.110 [2024-05-15 19:45:54.251624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.110 [2024-05-15 19:45:54.260377] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.110 [2024-05-15 19:45:54.260767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.110 [2024-05-15 19:45:54.260787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.110 [2024-05-15 19:45:54.269294] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.110 [2024-05-15 19:45:54.269692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.110 [2024-05-15 19:45:54.269713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.110 [2024-05-15 19:45:54.278844] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.110 [2024-05-15 19:45:54.279104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.110 [2024-05-15 19:45:54.279126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.110 [2024-05-15 19:45:54.287594] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.110 [2024-05-15 19:45:54.287978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.110 [2024-05-15 19:45:54.287998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.372 [2024-05-15 19:45:54.296989] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.372 [2024-05-15 19:45:54.297283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.372 [2024-05-15 19:45:54.297307] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.372 [2024-05-15 19:45:54.306542] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.372 [2024-05-15 19:45:54.306789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.372 [2024-05-15 19:45:54.306811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.372 [2024-05-15 19:45:54.313409] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.372 [2024-05-15 19:45:54.313669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.372 [2024-05-15 19:45:54.313691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.372 [2024-05-15 19:45:54.320288] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.372 [2024-05-15 19:45:54.320553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.372 [2024-05-15 19:45:54.320575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.372 [2024-05-15 19:45:54.326652] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.372 [2024-05-15 19:45:54.327029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.372 [2024-05-15 19:45:54.327051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.372 [2024-05-15 19:45:54.335358] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.372 [2024-05-15 19:45:54.335751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.372 [2024-05-15 19:45:54.335771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.372 [2024-05-15 19:45:54.344363] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.372 [2024-05-15 19:45:54.344732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.372 [2024-05-15 19:45:54.344753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.372 [2024-05-15 19:45:54.353012] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.372 [2024-05-15 19:45:54.353298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.372 [2024-05-15 
19:45:54.353325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.372 [2024-05-15 19:45:54.361807] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.372 [2024-05-15 19:45:54.362050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.372 [2024-05-15 19:45:54.362071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.372 [2024-05-15 19:45:54.369652] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.372 [2024-05-15 19:45:54.369901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.372 [2024-05-15 19:45:54.369923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.372 [2024-05-15 19:45:54.378287] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.372 [2024-05-15 19:45:54.378616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.372 [2024-05-15 19:45:54.378637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.372 [2024-05-15 19:45:54.386903] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.372 [2024-05-15 19:45:54.387203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.372 [2024-05-15 19:45:54.387224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.372 [2024-05-15 19:45:54.392930] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.372 [2024-05-15 19:45:54.393339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.372 [2024-05-15 19:45:54.393360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.372 [2024-05-15 19:45:54.400793] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.372 [2024-05-15 19:45:54.401036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.372 [2024-05-15 19:45:54.401056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.372 [2024-05-15 19:45:54.406234] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.372 [2024-05-15 19:45:54.406567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:28.372 [2024-05-15 19:45:54.406588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.372 [2024-05-15 19:45:54.415421] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.372 [2024-05-15 19:45:54.415786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.372 [2024-05-15 19:45:54.415807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.372 [2024-05-15 19:45:54.421669] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.372 [2024-05-15 19:45:54.421945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.372 [2024-05-15 19:45:54.421966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.372 [2024-05-15 19:45:54.427554] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.372 [2024-05-15 19:45:54.427917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.372 [2024-05-15 19:45:54.427938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.372 [2024-05-15 19:45:54.433279] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.372 [2024-05-15 19:45:54.433745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.372 [2024-05-15 19:45:54.433766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.372 [2024-05-15 19:45:54.439794] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.372 [2024-05-15 19:45:54.440120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.372 [2024-05-15 19:45:54.440141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.372 [2024-05-15 19:45:54.447146] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.372 [2024-05-15 19:45:54.447396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.372 [2024-05-15 19:45:54.447417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.372 [2024-05-15 19:45:54.454687] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.372 [2024-05-15 19:45:54.454929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.372 [2024-05-15 19:45:54.454950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.372 [2024-05-15 19:45:54.461151] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.372 [2024-05-15 19:45:54.461400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.372 [2024-05-15 19:45:54.461421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.372 [2024-05-15 19:45:54.467670] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.372 [2024-05-15 19:45:54.468046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.372 [2024-05-15 19:45:54.468066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.372 [2024-05-15 19:45:54.473510] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.373 [2024-05-15 19:45:54.473880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.373 [2024-05-15 19:45:54.473901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.373 [2024-05-15 19:45:54.481079] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.373 [2024-05-15 19:45:54.481571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.373 [2024-05-15 19:45:54.481592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.373 [2024-05-15 19:45:54.490574] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.373 [2024-05-15 19:45:54.490984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.373 [2024-05-15 19:45:54.491012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.373 [2024-05-15 19:45:54.499496] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.373 [2024-05-15 19:45:54.499785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.373 [2024-05-15 19:45:54.499806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.373 [2024-05-15 19:45:54.507464] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.373 [2024-05-15 19:45:54.507804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.373 [2024-05-15 19:45:54.507824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.373 [2024-05-15 19:45:54.515369] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.373 [2024-05-15 19:45:54.515757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.373 [2024-05-15 19:45:54.515778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.373 [2024-05-15 19:45:54.524826] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.373 [2024-05-15 19:45:54.525186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.373 [2024-05-15 19:45:54.525207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.373 [2024-05-15 19:45:54.531925] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.373 [2024-05-15 19:45:54.532170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.373 [2024-05-15 19:45:54.532192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.373 [2024-05-15 19:45:54.540451] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.373 [2024-05-15 19:45:54.540694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.373 [2024-05-15 19:45:54.540714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.373 [2024-05-15 19:45:54.551410] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.373 [2024-05-15 19:45:54.551865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.373 [2024-05-15 19:45:54.551885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.638 [2024-05-15 19:45:54.561003] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.638 [2024-05-15 19:45:54.561258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.638 [2024-05-15 19:45:54.561279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.638 [2024-05-15 19:45:54.567839] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.638 [2024-05-15 19:45:54.568085] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.638 [2024-05-15 19:45:54.568106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.638 [2024-05-15 19:45:54.575555] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.638 [2024-05-15 19:45:54.575941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.638 [2024-05-15 19:45:54.575962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.638 [2024-05-15 19:45:54.583696] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.638 [2024-05-15 19:45:54.584019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.638 [2024-05-15 19:45:54.584039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.638 [2024-05-15 19:45:54.593801] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.638 [2024-05-15 19:45:54.594100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.638 [2024-05-15 19:45:54.594121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.638 [2024-05-15 19:45:54.603946] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.638 [2024-05-15 19:45:54.604222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.638 [2024-05-15 19:45:54.604243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.638 [2024-05-15 19:45:54.611512] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.638 [2024-05-15 19:45:54.611770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.638 [2024-05-15 19:45:54.611792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.638 [2024-05-15 19:45:54.619776] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.638 [2024-05-15 19:45:54.620145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.638 [2024-05-15 19:45:54.620166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.639 [2024-05-15 19:45:54.629618] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.639 
[2024-05-15 19:45:54.629916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.639 [2024-05-15 19:45:54.629937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.639 [2024-05-15 19:45:54.638694] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.639 [2024-05-15 19:45:54.639027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.639 [2024-05-15 19:45:54.639047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.639 [2024-05-15 19:45:54.646874] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.639 [2024-05-15 19:45:54.647256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.639 [2024-05-15 19:45:54.647276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.639 [2024-05-15 19:45:54.656328] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.639 [2024-05-15 19:45:54.656706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.639 [2024-05-15 19:45:54.656726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.639 [2024-05-15 19:45:54.665159] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.639 [2024-05-15 19:45:54.665407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.639 [2024-05-15 19:45:54.665427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.639 [2024-05-15 19:45:54.673186] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.639 [2024-05-15 19:45:54.673529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.639 [2024-05-15 19:45:54.673550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.639 [2024-05-15 19:45:54.682272] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.639 [2024-05-15 19:45:54.682665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.639 [2024-05-15 19:45:54.682685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.639 [2024-05-15 19:45:54.689703] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with 
pdu=0x2000190fef90 00:30:28.639 [2024-05-15 19:45:54.690097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.639 [2024-05-15 19:45:54.690117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.639 [2024-05-15 19:45:54.699103] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.639 [2024-05-15 19:45:54.699306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.639 [2024-05-15 19:45:54.699331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.639 [2024-05-15 19:45:54.708668] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.639 [2024-05-15 19:45:54.708863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.639 [2024-05-15 19:45:54.708882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.639 [2024-05-15 19:45:54.717119] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.639 [2024-05-15 19:45:54.717421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.639 [2024-05-15 19:45:54.717447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.639 [2024-05-15 19:45:54.726488] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.639 [2024-05-15 19:45:54.726922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.639 [2024-05-15 19:45:54.726942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.639 [2024-05-15 19:45:54.737931] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.639 [2024-05-15 19:45:54.738328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.639 [2024-05-15 19:45:54.738349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.639 [2024-05-15 19:45:54.749630] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.639 [2024-05-15 19:45:54.750019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.639 [2024-05-15 19:45:54.750040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.639 [2024-05-15 19:45:54.760030] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.639 [2024-05-15 19:45:54.760240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.639 [2024-05-15 19:45:54.760260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.639 [2024-05-15 19:45:54.770410] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.639 [2024-05-15 19:45:54.770750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.639 [2024-05-15 19:45:54.770771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.639 [2024-05-15 19:45:54.780098] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.639 [2024-05-15 19:45:54.780603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.639 [2024-05-15 19:45:54.780623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.639 [2024-05-15 19:45:54.791463] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.639 [2024-05-15 19:45:54.791824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.639 [2024-05-15 19:45:54.791845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.639 [2024-05-15 19:45:54.801329] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.639 [2024-05-15 19:45:54.801490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.639 [2024-05-15 19:45:54.801509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.639 [2024-05-15 19:45:54.808847] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.639 [2024-05-15 19:45:54.809102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.639 [2024-05-15 19:45:54.809123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.639 [2024-05-15 19:45:54.815881] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.639 [2024-05-15 19:45:54.816140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.639 [2024-05-15 19:45:54.816162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.639 [2024-05-15 19:45:54.821953] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.931 [2024-05-15 19:45:54.822204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.931 [2024-05-15 19:45:54.822225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.931 [2024-05-15 19:45:54.828674] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.931 [2024-05-15 19:45:54.828918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.931 [2024-05-15 19:45:54.828940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.931 [2024-05-15 19:45:54.834309] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.931 [2024-05-15 19:45:54.834634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.931 [2024-05-15 19:45:54.834655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.931 [2024-05-15 19:45:54.840807] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.931 [2024-05-15 19:45:54.841076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.931 [2024-05-15 19:45:54.841096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.931 [2024-05-15 19:45:54.847449] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.931 [2024-05-15 19:45:54.847695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.931 [2024-05-15 19:45:54.847717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.931 [2024-05-15 19:45:54.856244] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.931 [2024-05-15 19:45:54.856603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.931 [2024-05-15 19:45:54.856623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.931 [2024-05-15 19:45:54.864546] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.931 [2024-05-15 19:45:54.864813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.931 [2024-05-15 19:45:54.864834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
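Context for the repeated triplets above and below: each one is the host-side TCP transport reporting a CRC32C data-digest mismatch on a WRITE PDU, followed by that command completing with TRANSIENT TRANSPORT ERROR (00/22). As a rough, hedged illustration of what a data-digest check of this kind amounts to (this is not SPDK's implementation; crc32c_sw and data_digest_ok are hypothetical names used only for this sketch):

/* Illustrative sketch only -- not SPDK code.
 * NVMe/TCP protects PDU payloads with a CRC32C "data digest";
 * a mismatch between the received digest and the recomputed one
 * is what the log records above report as a data digest error. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78,
 * init and final XOR of 0xFFFFFFFF. */
static uint32_t crc32c_sw(const uint8_t *buf, size_t len)
{
	uint32_t crc = 0xFFFFFFFFu;

	for (size_t i = 0; i < len; i++) {
		crc ^= buf[i];
		for (int k = 0; k < 8; k++) {
			crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
		}
	}

	return ~crc;
}

/* Returns false when the recomputed digest does not match the digest
 * carried with the PDU payload -- the "data digest error" case. */
static bool data_digest_ok(const uint8_t *payload, size_t len,
			   uint32_t received_digest)
{
	return crc32c_sw(payload, len) == received_digest;
}

int main(void)
{
	/* Standard CRC32C check value: crc32c("123456789") == 0xE3069283. */
	const uint8_t msg[] = "123456789";
	uint32_t digest = crc32c_sw(msg, sizeof(msg) - 1);

	printf("crc32c=0x%08X match=%d\n", (unsigned)digest,
	       data_digest_ok(msg, sizeof(msg) - 1, 0xE3069283u));
	return 0;
}

In this test run the mismatches are expected (digest errors are being exercised deliberately), which is why each one surfaces only as a NOTICE-level transient transport error completion rather than failing the run. The raw log continues below.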
00:30:28.931 [2024-05-15 19:45:54.871792] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.931 [2024-05-15 19:45:54.872036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.932 [2024-05-15 19:45:54.872056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.932 [2024-05-15 19:45:54.880684] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.932 [2024-05-15 19:45:54.881086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.932 [2024-05-15 19:45:54.881107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.932 [2024-05-15 19:45:54.889714] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.932 [2024-05-15 19:45:54.890238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.932 [2024-05-15 19:45:54.890259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.932 [2024-05-15 19:45:54.899914] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.932 [2024-05-15 19:45:54.900159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.932 [2024-05-15 19:45:54.900180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.932 [2024-05-15 19:45:54.906582] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.932 [2024-05-15 19:45:54.906829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.932 [2024-05-15 19:45:54.906850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.932 [2024-05-15 19:45:54.912634] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.932 [2024-05-15 19:45:54.913030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.932 [2024-05-15 19:45:54.913051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.932 [2024-05-15 19:45:54.918827] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.932 [2024-05-15 19:45:54.919157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.932 [2024-05-15 19:45:54.919178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.932 [2024-05-15 19:45:54.925588] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.932 [2024-05-15 19:45:54.925982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.932 [2024-05-15 19:45:54.926004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.932 [2024-05-15 19:45:54.932867] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.932 [2024-05-15 19:45:54.933113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.932 [2024-05-15 19:45:54.933138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.932 [2024-05-15 19:45:54.939892] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.932 [2024-05-15 19:45:54.940218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.932 [2024-05-15 19:45:54.940239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.932 [2024-05-15 19:45:54.945712] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.932 [2024-05-15 19:45:54.945995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.932 [2024-05-15 19:45:54.946016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.932 [2024-05-15 19:45:54.951731] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.932 [2024-05-15 19:45:54.951975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.932 [2024-05-15 19:45:54.951996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.932 [2024-05-15 19:45:54.958269] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.932 [2024-05-15 19:45:54.958513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.932 [2024-05-15 19:45:54.958534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.932 [2024-05-15 19:45:54.964177] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.932 [2024-05-15 19:45:54.964540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.932 [2024-05-15 19:45:54.964561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.932 [2024-05-15 19:45:54.970570] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.932 [2024-05-15 19:45:54.970978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.932 [2024-05-15 19:45:54.970998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.932 [2024-05-15 19:45:54.977561] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.932 [2024-05-15 19:45:54.977812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.932 [2024-05-15 19:45:54.977832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.932 [2024-05-15 19:45:54.985079] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.932 [2024-05-15 19:45:54.985468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.932 [2024-05-15 19:45:54.985489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.932 [2024-05-15 19:45:54.991341] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.932 [2024-05-15 19:45:54.991591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.932 [2024-05-15 19:45:54.991610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.932 [2024-05-15 19:45:54.998412] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.932 [2024-05-15 19:45:54.998689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.932 [2024-05-15 19:45:54.998709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.932 [2024-05-15 19:45:55.005222] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.932 [2024-05-15 19:45:55.005488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.932 [2024-05-15 19:45:55.005510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.932 [2024-05-15 19:45:55.011569] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.932 [2024-05-15 19:45:55.011811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.932 [2024-05-15 19:45:55.011831] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.932 [2024-05-15 19:45:55.018136] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.932 [2024-05-15 19:45:55.018387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.932 [2024-05-15 19:45:55.018408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.932 [2024-05-15 19:45:55.023486] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.932 [2024-05-15 19:45:55.023729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.932 [2024-05-15 19:45:55.023750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.932 [2024-05-15 19:45:55.030429] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.932 [2024-05-15 19:45:55.030673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.932 [2024-05-15 19:45:55.030693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.932 [2024-05-15 19:45:55.036292] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.932 [2024-05-15 19:45:55.036716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.932 [2024-05-15 19:45:55.036737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.932 [2024-05-15 19:45:55.043964] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.932 [2024-05-15 19:45:55.044207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.932 [2024-05-15 19:45:55.044226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.932 [2024-05-15 19:45:55.053305] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.932 [2024-05-15 19:45:55.053681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.932 [2024-05-15 19:45:55.053703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.932 [2024-05-15 19:45:55.062252] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.932 [2024-05-15 19:45:55.062502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.932 
[2024-05-15 19:45:55.062524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.932 [2024-05-15 19:45:55.068778] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.932 [2024-05-15 19:45:55.069023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.932 [2024-05-15 19:45:55.069044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.932 [2024-05-15 19:45:55.075691] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.932 [2024-05-15 19:45:55.075935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.932 [2024-05-15 19:45:55.075955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.932 [2024-05-15 19:45:55.081886] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.932 [2024-05-15 19:45:55.082131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.932 [2024-05-15 19:45:55.082151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.932 [2024-05-15 19:45:55.088071] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.932 [2024-05-15 19:45:55.088321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.932 [2024-05-15 19:45:55.088341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.932 [2024-05-15 19:45:55.095247] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.932 [2024-05-15 19:45:55.095611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.932 [2024-05-15 19:45:55.095632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.932 [2024-05-15 19:45:55.101828] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:28.932 [2024-05-15 19:45:55.102075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.932 [2024-05-15 19:45:55.102096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.218 [2024-05-15 19:45:55.109073] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.218 [2024-05-15 19:45:55.109321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:30:29.218 [2024-05-15 19:45:55.109345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.218 [2024-05-15 19:45:55.115912] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.218 [2024-05-15 19:45:55.116253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.218 [2024-05-15 19:45:55.116275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.218 [2024-05-15 19:45:55.122603] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.218 [2024-05-15 19:45:55.123016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.218 [2024-05-15 19:45:55.123037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.218 [2024-05-15 19:45:55.128629] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.218 [2024-05-15 19:45:55.128871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.218 [2024-05-15 19:45:55.128892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.218 [2024-05-15 19:45:55.135036] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.218 [2024-05-15 19:45:55.135304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.218 [2024-05-15 19:45:55.135331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.218 [2024-05-15 19:45:55.143234] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.218 [2024-05-15 19:45:55.143584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.218 [2024-05-15 19:45:55.143606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.218 [2024-05-15 19:45:55.151759] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.218 [2024-05-15 19:45:55.152048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.218 [2024-05-15 19:45:55.152070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.218 [2024-05-15 19:45:55.160371] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.218 [2024-05-15 19:45:55.160614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.218 [2024-05-15 19:45:55.160635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.218 [2024-05-15 19:45:55.166478] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.218 [2024-05-15 19:45:55.166812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.218 [2024-05-15 19:45:55.166834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.218 [2024-05-15 19:45:55.173720] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.218 [2024-05-15 19:45:55.173962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.218 [2024-05-15 19:45:55.173982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.218 [2024-05-15 19:45:55.179993] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.218 [2024-05-15 19:45:55.180245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.218 [2024-05-15 19:45:55.180266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.218 [2024-05-15 19:45:55.186306] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.218 [2024-05-15 19:45:55.186587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.218 [2024-05-15 19:45:55.186609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.218 [2024-05-15 19:45:55.193905] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.218 [2024-05-15 19:45:55.194184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.218 [2024-05-15 19:45:55.194204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.218 [2024-05-15 19:45:55.201995] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.218 [2024-05-15 19:45:55.202270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.218 [2024-05-15 19:45:55.202290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.218 [2024-05-15 19:45:55.210579] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.218 [2024-05-15 19:45:55.210820] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.218 [2024-05-15 19:45:55.210841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.218 [2024-05-15 19:45:55.216394] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.218 [2024-05-15 19:45:55.216711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.218 [2024-05-15 19:45:55.216732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.218 [2024-05-15 19:45:55.223654] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.218 [2024-05-15 19:45:55.223899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.218 [2024-05-15 19:45:55.223921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.218 [2024-05-15 19:45:55.230155] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.218 [2024-05-15 19:45:55.230497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.218 [2024-05-15 19:45:55.230522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.218 [2024-05-15 19:45:55.237831] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.218 [2024-05-15 19:45:55.238236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.218 [2024-05-15 19:45:55.238257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.218 [2024-05-15 19:45:55.246946] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.218 [2024-05-15 19:45:55.247189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.218 [2024-05-15 19:45:55.247210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.218 [2024-05-15 19:45:55.255419] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.218 [2024-05-15 19:45:55.255774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.218 [2024-05-15 19:45:55.255795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.218 [2024-05-15 19:45:55.264590] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.218 [2024-05-15 19:45:55.264832] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.218 [2024-05-15 19:45:55.264853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.218 [2024-05-15 19:45:55.273356] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.218 [2024-05-15 19:45:55.273748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.218 [2024-05-15 19:45:55.273769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.218 [2024-05-15 19:45:55.280861] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.218 [2024-05-15 19:45:55.281142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.218 [2024-05-15 19:45:55.281163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.218 [2024-05-15 19:45:55.289460] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.218 [2024-05-15 19:45:55.289876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.218 [2024-05-15 19:45:55.289896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.219 [2024-05-15 19:45:55.297072] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.219 [2024-05-15 19:45:55.297370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.219 [2024-05-15 19:45:55.297390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.219 [2024-05-15 19:45:55.305782] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.219 [2024-05-15 19:45:55.306024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.219 [2024-05-15 19:45:55.306049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.219 [2024-05-15 19:45:55.311411] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.219 [2024-05-15 19:45:55.311691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.219 [2024-05-15 19:45:55.311712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.219 [2024-05-15 19:45:55.317650] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 
00:30:29.219 [2024-05-15 19:45:55.317894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.219 [2024-05-15 19:45:55.317914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.219 [2024-05-15 19:45:55.326215] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.219 [2024-05-15 19:45:55.326637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.219 [2024-05-15 19:45:55.326658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.219 [2024-05-15 19:45:55.334664] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.219 [2024-05-15 19:45:55.334927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.219 [2024-05-15 19:45:55.334948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.219 [2024-05-15 19:45:55.339865] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.219 [2024-05-15 19:45:55.340109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.219 [2024-05-15 19:45:55.340129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.219 [2024-05-15 19:45:55.344747] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.219 [2024-05-15 19:45:55.344991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.219 [2024-05-15 19:45:55.345010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.219 [2024-05-15 19:45:55.350198] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.219 [2024-05-15 19:45:55.350445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.219 [2024-05-15 19:45:55.350465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.219 [2024-05-15 19:45:55.357662] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.219 [2024-05-15 19:45:55.358007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.219 [2024-05-15 19:45:55.358028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.219 [2024-05-15 19:45:55.367162] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.219 [2024-05-15 19:45:55.367584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.219 [2024-05-15 19:45:55.367605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.219 [2024-05-15 19:45:55.376557] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.219 [2024-05-15 19:45:55.376907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.219 [2024-05-15 19:45:55.376927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.219 [2024-05-15 19:45:55.386259] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.219 [2024-05-15 19:45:55.386623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.219 [2024-05-15 19:45:55.386644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.486 [2024-05-15 19:45:55.396066] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.486 [2024-05-15 19:45:55.396439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.486 [2024-05-15 19:45:55.396460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.486 [2024-05-15 19:45:55.406657] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.486 [2024-05-15 19:45:55.407059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.486 [2024-05-15 19:45:55.407080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.486 [2024-05-15 19:45:55.416891] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.486 [2024-05-15 19:45:55.417245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.486 [2024-05-15 19:45:55.417266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.486 [2024-05-15 19:45:55.425389] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.486 [2024-05-15 19:45:55.425640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.486 [2024-05-15 19:45:55.425660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.486 [2024-05-15 19:45:55.434042] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.486 [2024-05-15 19:45:55.434561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.486 [2024-05-15 19:45:55.434582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.486 [2024-05-15 19:45:55.445320] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.486 [2024-05-15 19:45:55.445703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.486 [2024-05-15 19:45:55.445727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.486 [2024-05-15 19:45:55.457132] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.486 [2024-05-15 19:45:55.457538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.486 [2024-05-15 19:45:55.457560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.486 [2024-05-15 19:45:55.467193] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.486 [2024-05-15 19:45:55.467467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.486 [2024-05-15 19:45:55.467488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.486 [2024-05-15 19:45:55.476558] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.486 [2024-05-15 19:45:55.476900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.486 [2024-05-15 19:45:55.476921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.486 [2024-05-15 19:45:55.483746] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.486 [2024-05-15 19:45:55.483989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.486 [2024-05-15 19:45:55.484010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.486 [2024-05-15 19:45:55.492003] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.486 [2024-05-15 19:45:55.492392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.486 [2024-05-15 19:45:55.492413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
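Each repetition in this stretch is one data-digest failure exercised by the error test: tcp.c reports a CRC32C data-digest mismatch on tqpair 0x6a0200, and the WRITE queued on that connection is then completed with NVMe generic status 0x22, which spdk_nvme_print_completion renders as COMMAND TRANSIENT TRANSPORT ERROR (00/22). A rough way to tally such completions from a captured log is sketched below; it is illustrative only, and bperf.log is a hypothetical capture file, not something produced by this run.

    # Count completions carrying the transient transport error status (00/22).
    # Assuming the namespace is formatted with 4096-byte blocks, each len:32
    # WRITE above is 32 * 4096 = 131072 bytes, matching the 128 KiB I/O size
    # reported in the job summary further down.
    grep -c 'TRANSIENT TRANSPORT ERROR (00/22)' bperf.log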
00:30:29.486 [2024-05-15 19:45:55.501082] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.487 [2024-05-15 19:45:55.501332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.487 [2024-05-15 19:45:55.501353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.487 [2024-05-15 19:45:55.507812] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.487 [2024-05-15 19:45:55.508055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.487 [2024-05-15 19:45:55.508075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.487 [2024-05-15 19:45:55.513687] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.487 [2024-05-15 19:45:55.513949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.487 [2024-05-15 19:45:55.513970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.487 [2024-05-15 19:45:55.520800] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.487 [2024-05-15 19:45:55.521072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.487 [2024-05-15 19:45:55.521093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.487 [2024-05-15 19:45:55.527566] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.487 [2024-05-15 19:45:55.527935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.487 [2024-05-15 19:45:55.527955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.487 [2024-05-15 19:45:55.534488] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.487 [2024-05-15 19:45:55.534733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.487 [2024-05-15 19:45:55.534753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.487 [2024-05-15 19:45:55.542144] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.487 [2024-05-15 19:45:55.542393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.487 [2024-05-15 19:45:55.542413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.487 [2024-05-15 19:45:55.550091] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.487 [2024-05-15 19:45:55.550340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.487 [2024-05-15 19:45:55.550361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.487 [2024-05-15 19:45:55.557011] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.487 [2024-05-15 19:45:55.557251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.487 [2024-05-15 19:45:55.557272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.487 [2024-05-15 19:45:55.565482] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.487 [2024-05-15 19:45:55.565822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.487 [2024-05-15 19:45:55.565843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.487 [2024-05-15 19:45:55.575556] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.487 [2024-05-15 19:45:55.575929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.487 [2024-05-15 19:45:55.575950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.487 [2024-05-15 19:45:55.585219] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.487 [2024-05-15 19:45:55.585597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.487 [2024-05-15 19:45:55.585618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.487 [2024-05-15 19:45:55.593560] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.487 [2024-05-15 19:45:55.593941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.487 [2024-05-15 19:45:55.593962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.487 [2024-05-15 19:45:55.602597] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.487 [2024-05-15 19:45:55.602848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.487 [2024-05-15 19:45:55.602869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.487 [2024-05-15 19:45:55.610349] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.487 [2024-05-15 19:45:55.610582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.487 [2024-05-15 19:45:55.610604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.487 [2024-05-15 19:45:55.616001] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.487 [2024-05-15 19:45:55.616231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.487 [2024-05-15 19:45:55.616250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.487 [2024-05-15 19:45:55.624190] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.487 [2024-05-15 19:45:55.624580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.487 [2024-05-15 19:45:55.624601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.487 [2024-05-15 19:45:55.631612] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.487 [2024-05-15 19:45:55.631861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.487 [2024-05-15 19:45:55.631882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.487 [2024-05-15 19:45:55.641579] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.487 [2024-05-15 19:45:55.641959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.487 [2024-05-15 19:45:55.641980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.487 [2024-05-15 19:45:55.649769] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.487 [2024-05-15 19:45:55.650039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.487 [2024-05-15 19:45:55.650060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.487 [2024-05-15 19:45:55.657096] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.487 [2024-05-15 19:45:55.657332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.487 [2024-05-15 19:45:55.657355] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.487 [2024-05-15 19:45:55.663792] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.487 [2024-05-15 19:45:55.664024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.487 [2024-05-15 19:45:55.664045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.749 [2024-05-15 19:45:55.670663] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.749 [2024-05-15 19:45:55.670846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.749 [2024-05-15 19:45:55.670865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.749 [2024-05-15 19:45:55.676672] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.749 [2024-05-15 19:45:55.676904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.749 [2024-05-15 19:45:55.676923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.749 [2024-05-15 19:45:55.683481] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.749 [2024-05-15 19:45:55.683863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.749 [2024-05-15 19:45:55.683883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.749 [2024-05-15 19:45:55.690622] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.749 [2024-05-15 19:45:55.690851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.749 [2024-05-15 19:45:55.690871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.749 [2024-05-15 19:45:55.695715] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.749 [2024-05-15 19:45:55.696071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.749 [2024-05-15 19:45:55.696093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.749 [2024-05-15 19:45:55.701562] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.749 [2024-05-15 19:45:55.701793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.749 
[2024-05-15 19:45:55.701822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.749 [2024-05-15 19:45:55.705974] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.749 [2024-05-15 19:45:55.706204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.749 [2024-05-15 19:45:55.706224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.749 [2024-05-15 19:45:55.712904] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.749 [2024-05-15 19:45:55.713138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.749 [2024-05-15 19:45:55.713158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.749 [2024-05-15 19:45:55.721809] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.749 [2024-05-15 19:45:55.722189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.749 [2024-05-15 19:45:55.722210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.749 [2024-05-15 19:45:55.731995] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.749 [2024-05-15 19:45:55.732269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.749 [2024-05-15 19:45:55.732290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.749 [2024-05-15 19:45:55.743155] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.749 [2024-05-15 19:45:55.743542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.749 [2024-05-15 19:45:55.743562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.749 [2024-05-15 19:45:55.753611] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.749 [2024-05-15 19:45:55.753896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.749 [2024-05-15 19:45:55.753916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.749 [2024-05-15 19:45:55.765418] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.749 [2024-05-15 19:45:55.765822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:30:29.749 [2024-05-15 19:45:55.765843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.749 [2024-05-15 19:45:55.776046] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.749 [2024-05-15 19:45:55.776570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.749 [2024-05-15 19:45:55.776591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.749 [2024-05-15 19:45:55.786825] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.749 [2024-05-15 19:45:55.787308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.749 [2024-05-15 19:45:55.787334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.749 [2024-05-15 19:45:55.797252] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.749 [2024-05-15 19:45:55.797664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.749 [2024-05-15 19:45:55.797685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.749 [2024-05-15 19:45:55.808469] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.749 [2024-05-15 19:45:55.808851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.749 [2024-05-15 19:45:55.808871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.750 [2024-05-15 19:45:55.819340] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.750 [2024-05-15 19:45:55.819607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.750 [2024-05-15 19:45:55.819628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.750 [2024-05-15 19:45:55.827558] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.750 [2024-05-15 19:45:55.827802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.750 [2024-05-15 19:45:55.827823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.750 [2024-05-15 19:45:55.835633] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.750 [2024-05-15 19:45:55.835940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.750 [2024-05-15 19:45:55.835960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.750 [2024-05-15 19:45:55.844694] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.750 [2024-05-15 19:45:55.844926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.750 [2024-05-15 19:45:55.844946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.750 [2024-05-15 19:45:55.852914] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.750 [2024-05-15 19:45:55.853264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.750 [2024-05-15 19:45:55.853285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.750 [2024-05-15 19:45:55.859111] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.750 [2024-05-15 19:45:55.859348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.750 [2024-05-15 19:45:55.859368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.750 [2024-05-15 19:45:55.865237] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.750 [2024-05-15 19:45:55.865503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.750 [2024-05-15 19:45:55.865525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.750 [2024-05-15 19:45:55.872349] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.750 [2024-05-15 19:45:55.872607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.750 [2024-05-15 19:45:55.872635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.750 [2024-05-15 19:45:55.879666] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.750 [2024-05-15 19:45:55.879896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.750 [2024-05-15 19:45:55.879915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.750 [2024-05-15 19:45:55.888585] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.750 [2024-05-15 19:45:55.888816] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.750 [2024-05-15 19:45:55.888844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.750 [2024-05-15 19:45:55.896268] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.750 [2024-05-15 19:45:55.896525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.750 [2024-05-15 19:45:55.896546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.750 [2024-05-15 19:45:55.906971] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.750 [2024-05-15 19:45:55.907250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.750 [2024-05-15 19:45:55.907271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.750 [2024-05-15 19:45:55.914666] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.750 [2024-05-15 19:45:55.914971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.750 [2024-05-15 19:45:55.914992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.750 [2024-05-15 19:45:55.922608] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.750 [2024-05-15 19:45:55.922841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.750 [2024-05-15 19:45:55.922862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.750 [2024-05-15 19:45:55.931137] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:29.750 [2024-05-15 19:45:55.931371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.750 [2024-05-15 19:45:55.931390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.011 [2024-05-15 19:45:55.938439] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:30.011 [2024-05-15 19:45:55.938847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.011 [2024-05-15 19:45:55.938867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.011 [2024-05-15 19:45:55.946253] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:30.011 
[2024-05-15 19:45:55.946529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.011 [2024-05-15 19:45:55.946550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.011 [2024-05-15 19:45:55.953423] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:30.011 [2024-05-15 19:45:55.953670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.011 [2024-05-15 19:45:55.953691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.011 [2024-05-15 19:45:55.961720] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:30.011 [2024-05-15 19:45:55.961973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.011 [2024-05-15 19:45:55.961993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.011 [2024-05-15 19:45:55.970879] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:30.011 [2024-05-15 19:45:55.971221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.011 [2024-05-15 19:45:55.971243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.011 [2024-05-15 19:45:55.980442] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:30.011 [2024-05-15 19:45:55.980734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.011 [2024-05-15 19:45:55.980755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.011 [2024-05-15 19:45:55.986941] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:30.011 [2024-05-15 19:45:55.987172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.011 [2024-05-15 19:45:55.987191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.011 [2024-05-15 19:45:55.992612] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:30.011 [2024-05-15 19:45:55.992915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.011 [2024-05-15 19:45:55.992935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.011 [2024-05-15 19:45:55.999395] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) 
with pdu=0x2000190fef90 00:30:30.011 [2024-05-15 19:45:55.999697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.011 [2024-05-15 19:45:55.999718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.011 [2024-05-15 19:45:56.007987] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:30.011 [2024-05-15 19:45:56.008224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.012 [2024-05-15 19:45:56.008244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.012 [2024-05-15 19:45:56.013963] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:30.012 [2024-05-15 19:45:56.014232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.012 [2024-05-15 19:45:56.014253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.012 [2024-05-15 19:45:56.021597] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:30.012 [2024-05-15 19:45:56.021857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.012 [2024-05-15 19:45:56.021877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.012 [2024-05-15 19:45:56.027043] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:30.012 [2024-05-15 19:45:56.027429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.012 [2024-05-15 19:45:56.027451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.012 [2024-05-15 19:45:56.034982] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:30.012 [2024-05-15 19:45:56.035297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.012 [2024-05-15 19:45:56.035323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.012 [2024-05-15 19:45:56.043814] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90 00:30:30.012 [2024-05-15 19:45:56.044303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.012 [2024-05-15 19:45:56.044329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.012 [2024-05-15 19:45:56.052987] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90
00:30:30.012 [2024-05-15 19:45:56.053388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:30.012 [2024-05-15 19:45:56.053409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:30.012 [2024-05-15 19:45:56.062952] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90
00:30:30.012 [2024-05-15 19:45:56.063263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:30.012 [2024-05-15 19:45:56.063283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:30.012 [2024-05-15 19:45:56.074290] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6a0200) with pdu=0x2000190fef90
00:30:30.012 [2024-05-15 19:45:56.074739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:30.012 [2024-05-15 19:45:56.074760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:30.012
00:30:30.012 Latency(us)
00:30:30.012 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:30.012 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:30:30.012 nvme0n1 : 2.01 3793.28 474.16 0.00 0.00 4209.31 1993.39 15510.19
00:30:30.012 ===================================================================================================================
00:30:30.012 Total : 3793.28 474.16 0.00 0.00 4209.31 1993.39 15510.19
00:30:30.012 0
00:30:30.012 19:45:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:30:30.012 19:45:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:30:30.012 19:45:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:30:30.012 19:45:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:30.012 | .driver_specific
00:30:30.012 | .nvme_error
00:30:30.012 | .status_code
00:30:30.012 | .command_transient_transport_error'
00:30:30.273 19:45:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 245 > 0 ))
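The readout traced above, which finds 245 transient transport errors, presumably boils down to a small helper along these lines. This is a sketch reconstructed from the trace, not the actual host/digest.sh source, so treat the exact shape of the function as an assumption.

    get_transient_errcount() {
            # Ask the bdevperf RPC server on /var/tmp/bperf.sock for per-bdev
            # I/O statistics and pull out the NVMe transient transport error
            # counter that the test then compares against zero.
            /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
                    -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" |
                    jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
    }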
00:30:30.273 19:45:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3789720
00:30:30.273 19:45:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3789720 ']'
00:30:30.273 19:45:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3789720
00:30:30.273 19:45:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:30:30.273 19:45:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:30:30.273 19:45:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3789720
00:30:30.273 19:45:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:30:30.273 19:45:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:30:30.273 19:45:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3789720'
00:30:30.273 killing process with pid 3789720
00:30:30.273 19:45:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3789720
00:30:30.273 Received shutdown signal, test time was about 2.000000 seconds
00:30:30.273
00:30:30.273 Latency(us)
00:30:30.273 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:30.273 ===================================================================================================================
00:30:30.273 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:30.273 19:45:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3789720
00:30:30.534 19:45:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3787563
00:30:30.534 19:45:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3787563 ']'
00:30:30.534 19:45:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3787563
00:30:30.534 19:45:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:30:30.534 19:45:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:30:30.534 19:45:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3787563
00:30:30.534 19:45:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:30:30.534 19:45:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:30:30.534 19:45:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3787563'
00:30:30.534 killing process with pid 3787563
00:30:30.534 19:45:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3787563
00:30:30.534 [2024-05-15 19:45:56.561886] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:30:30.534 19:45:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3787563
00:30:30.534
00:30:30.535 real 0m14.668s
00:30:30.535 user 0m28.845s
00:30:30.535 sys 0m3.294s
00:30:30.535 19:45:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable
00:30:30.535 19:45:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:30.535 ************************************
00:30:30.535 END TEST nvmf_digest_error
00:30:30.535 ************************************
00:30:30.796 19:45:56 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:30:30.796 19:45:56 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:30:30.796 19:45:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
00:30:30.796 19:45:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync
00:30:30.796 19:45:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:30:30.796 19:45:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e
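nvmfcleanup then unloads the kernel NVMe/TCP stack; the loop traced below runs up to 20 times, and the rmmod lines in its output come from modprobe -v. Its general shape is roughly the following simplified sketch; the real retry and error handling in nvmf/common.sh is not fully visible in this trace, so the loop body here is an assumption.

    for i in {1..20}; do
            # Remove the transport module first, then the fabrics core; stop
            # retrying once both unload cleanly.
            if modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics; then
                    break
            fi
            sleep 1
    done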
00:30:30.796 19:45:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
00:30:30.796 19:45:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:30:30.796 rmmod nvme_tcp
00:30:30.796 rmmod nvme_fabrics
00:30:30.796 rmmod nvme_keyring
00:30:30.796 19:45:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:30:30.796 19:45:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e
00:30:30.796 19:45:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0
00:30:30.796 19:45:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 3787563 ']'
00:30:30.796 19:45:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 3787563
00:30:30.796 19:45:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 3787563 ']'
00:30:30.796 19:45:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 3787563
00:30:30.796 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3787563) - No such process
00:30:30.796 19:45:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 3787563 is not found'
00:30:30.796 Process with pid 3787563 is not found
00:30:30.796 19:45:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:30:30.796 19:45:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:30:30.796 19:45:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:30:30.796 19:45:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:30:30.796 19:45:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns
00:30:30.796 19:45:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:30.796 19:45:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:30:30.796 19:45:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:32.710 19:45:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:30:32.710
00:30:32.710 real 0m39.966s
00:30:32.710 user 0m59.764s
00:30:32.710 sys 0m12.948s
00:30:32.710 19:45:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable
00:30:32.710 19:45:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:30:32.710 ************************************
00:30:32.710 END TEST nvmf_digest
00:30:32.710 ************************************
00:30:32.971 19:45:58 nvmf_tcp -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]]
00:30:32.971 19:45:58 nvmf_tcp -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]]
00:30:32.971 19:45:58 nvmf_tcp -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]]
00:30:32.971 19:45:58 nvmf_tcp -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:30:32.971 19:45:58 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:30:32.971 19:45:58 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:30:32.971 19:45:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:30:32.971 ************************************
00:30:32.971 START TEST nvmf_bdevperf
00:30:32.971 ************************************
00:30:32.971 19:45:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:30:32.971 * Looking for test storage...
00:30:32.971 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:32.971 19:45:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:32.971 19:45:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:30:32.971 19:45:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:32.971 19:45:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:32.971 19:45:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:32.971 19:45:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:32.971 19:45:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:32.971 19:45:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:32.971 19:45:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:32.971 19:45:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:32.971 19:45:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:32.971 19:45:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:32.971 19:45:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:32.971 19:45:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:32.971 19:45:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:32.971 19:45:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:32.971 19:45:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:32.971 19:45:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:32.971 19:45:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:32.971 19:45:59 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:32.971 19:45:59 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:32.971 19:45:59 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:32.971 19:45:59 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.971 19:45:59 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.972 19:45:59 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.972 19:45:59 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:30:32.972 19:45:59 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.972 19:45:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:30:32.972 19:45:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:32.972 19:45:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:32.972 19:45:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:32.972 19:45:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:32.972 19:45:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:32.972 19:45:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:32.972 19:45:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:32.972 19:45:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:32.972 19:45:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:32.972 19:45:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:32.972 19:45:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:30:32.972 19:45:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:32.972 19:45:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:32.972 19:45:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:32.972 19:45:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:32.972 19:45:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:32.972 19:45:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:32.972 19:45:59 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:32.972 19:45:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:32.972 19:45:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:32.972 19:45:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:32.972 19:45:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:30:32.972 19:45:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:41.112 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:41.112 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:41.112 Found net devices under 0000:31:00.0: cvl_0_0 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:41.112 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:41.113 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:41.113 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:41.113 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:41.113 Found net devices under 0000:31:00.1: cvl_0_1 00:30:41.113 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:41.113 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:41.113 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:30:41.113 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:41.113 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:41.113 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:41.113 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:41.113 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:41.113 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:41.113 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:41.113 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:41.113 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:41.113 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:41.113 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:41.113 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:41.113 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:41.113 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:41.113 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:41.113 19:46:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:41.113 19:46:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:41.113 19:46:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:41.113 19:46:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:41.113 19:46:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:41.113 19:46:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:41.113 19:46:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:41.113 19:46:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:41.113 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:41.113 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.528 ms 00:30:41.113 00:30:41.113 --- 10.0.0.2 ping statistics --- 00:30:41.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:41.113 rtt min/avg/max/mdev = 0.528/0.528/0.528/0.000 ms 00:30:41.113 19:46:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:41.113 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:41.113 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.343 ms 00:30:41.113 00:30:41.113 --- 10.0.0.1 ping statistics --- 00:30:41.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:41.113 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:30:41.113 19:46:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:41.113 19:46:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:30:41.113 19:46:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:41.113 19:46:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:41.113 19:46:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:41.113 19:46:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:41.113 19:46:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:41.113 19:46:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:41.113 19:46:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:41.113 19:46:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:30:41.113 19:46:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:41.113 19:46:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:41.113 19:46:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:41.113 19:46:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:41.113 19:46:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3794909 00:30:41.113 19:46:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3794909 00:30:41.113 19:46:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:41.113 19:46:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 3794909 ']' 00:30:41.113 19:46:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:41.113 19:46:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:41.113 19:46:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:41.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:41.113 19:46:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:41.113 19:46:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:41.113 [2024-05-15 19:46:07.281918] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:30:41.113 [2024-05-15 19:46:07.281981] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:41.373 EAL: No free 2048 kB hugepages reported on node 1 00:30:41.373 [2024-05-15 19:46:07.363976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:41.373 [2024-05-15 19:46:07.438552] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:41.373 [2024-05-15 19:46:07.438589] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:41.373 [2024-05-15 19:46:07.438597] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:41.373 [2024-05-15 19:46:07.438603] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:41.373 [2024-05-15 19:46:07.438609] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:41.373 [2024-05-15 19:46:07.438861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:41.374 [2024-05-15 19:46:07.438989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:41.374 [2024-05-15 19:46:07.438990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:42.314 19:46:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:42.314 19:46:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:30:42.314 19:46:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:42.314 19:46:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:42.314 19:46:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:42.314 19:46:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:42.314 19:46:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:42.314 19:46:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.314 19:46:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:42.314 [2024-05-15 19:46:08.202881] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:42.314 19:46:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.314 19:46:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:42.314 19:46:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.314 19:46:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:42.314 Malloc0 00:30:42.314 19:46:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.314 19:46:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:42.314 19:46:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.314 19:46:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:42.314 19:46:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.314 19:46:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:42.314 19:46:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.314 19:46:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:42.314 19:46:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.314 19:46:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:42.314 19:46:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:30:42.314 19:46:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:42.314 [2024-05-15 19:46:08.270468] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:30:42.314 [2024-05-15 19:46:08.270695] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:42.314 19:46:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.314 19:46:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:30:42.314 19:46:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:30:42.314 19:46:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:30:42.314 19:46:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:30:42.314 19:46:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:42.314 19:46:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:42.314 { 00:30:42.314 "params": { 00:30:42.314 "name": "Nvme$subsystem", 00:30:42.314 "trtype": "$TEST_TRANSPORT", 00:30:42.314 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.314 "adrfam": "ipv4", 00:30:42.314 "trsvcid": "$NVMF_PORT", 00:30:42.314 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.314 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.314 "hdgst": ${hdgst:-false}, 00:30:42.314 "ddgst": ${ddgst:-false} 00:30:42.314 }, 00:30:42.314 "method": "bdev_nvme_attach_controller" 00:30:42.314 } 00:30:42.314 EOF 00:30:42.314 )") 00:30:42.314 19:46:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:30:42.314 19:46:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:30:42.314 19:46:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:30:42.314 19:46:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:42.314 "params": { 00:30:42.314 "name": "Nvme1", 00:30:42.314 "trtype": "tcp", 00:30:42.314 "traddr": "10.0.0.2", 00:30:42.314 "adrfam": "ipv4", 00:30:42.314 "trsvcid": "4420", 00:30:42.314 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:42.314 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:42.314 "hdgst": false, 00:30:42.314 "ddgst": false 00:30:42.314 }, 00:30:42.314 "method": "bdev_nvme_attach_controller" 00:30:42.314 }' 00:30:42.314 [2024-05-15 19:46:08.322637] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:30:42.314 [2024-05-15 19:46:08.322685] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3795178 ] 00:30:42.314 EAL: No free 2048 kB hugepages reported on node 1 00:30:42.314 [2024-05-15 19:46:08.403135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:42.314 [2024-05-15 19:46:08.467662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:42.574 Running I/O for 1 seconds... 
00:30:43.514 00:30:43.514 Latency(us) 00:30:43.514 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:43.514 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:43.514 Verification LBA range: start 0x0 length 0x4000 00:30:43.514 Nvme1n1 : 1.01 8716.65 34.05 0.00 0.00 14617.99 3167.57 15400.96 00:30:43.514 =================================================================================================================== 00:30:43.514 Total : 8716.65 34.05 0.00 0.00 14617.99 3167.57 15400.96 00:30:43.774 19:46:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3795513 00:30:43.774 19:46:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:30:43.774 19:46:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:30:43.774 19:46:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:30:43.774 19:46:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:30:43.774 19:46:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:30:43.774 19:46:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:43.774 19:46:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:43.774 { 00:30:43.774 "params": { 00:30:43.774 "name": "Nvme$subsystem", 00:30:43.774 "trtype": "$TEST_TRANSPORT", 00:30:43.774 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:43.774 "adrfam": "ipv4", 00:30:43.774 "trsvcid": "$NVMF_PORT", 00:30:43.774 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:43.774 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:43.774 "hdgst": ${hdgst:-false}, 00:30:43.774 "ddgst": ${ddgst:-false} 00:30:43.774 }, 00:30:43.774 "method": "bdev_nvme_attach_controller" 00:30:43.774 } 00:30:43.775 EOF 00:30:43.775 )") 00:30:43.775 19:46:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:30:43.775 19:46:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:30:43.775 19:46:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:30:43.775 19:46:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:43.775 "params": { 00:30:43.775 "name": "Nvme1", 00:30:43.775 "trtype": "tcp", 00:30:43.775 "traddr": "10.0.0.2", 00:30:43.775 "adrfam": "ipv4", 00:30:43.775 "trsvcid": "4420", 00:30:43.775 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:43.775 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:43.775 "hdgst": false, 00:30:43.775 "ddgst": false 00:30:43.775 }, 00:30:43.775 "method": "bdev_nvme_attach_controller" 00:30:43.775 }' 00:30:43.775 [2024-05-15 19:46:09.840636] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:30:43.775 [2024-05-15 19:46:09.840694] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3795513 ] 00:30:43.775 EAL: No free 2048 kB hugepages reported on node 1 00:30:43.775 [2024-05-15 19:46:09.920900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.035 [2024-05-15 19:46:09.985317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:44.295 Running I/O for 15 seconds... 
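For reference, the bdevperf phase traced above reduces to a handful of steps: nvmf_tgt is started inside the cvl_0_0_ns_spdk namespace, the target is configured over RPC with a TCP transport, a 64 MiB malloc namespace and a listener on 10.0.0.2:4420, and bdevperf then attaches over NVMe/TCP using a generated JSON config. The following is a minimal standalone sketch, not the harness itself: it assumes the SPDK repo root as the working directory, the namespace plumbing already in place as set up earlier in the trace, and a temporary JSON file in place of the /dev/fd stream the harness uses; the config is simplified to the single attach call printed in the trace.

#!/usr/bin/env bash
# Minimal sketch of the traced bdevperf test flow (paths and the temp JSON file are assumptions).
set -e
RPC=./scripts/rpc.py    # assumed location of the SPDK RPC client

# Target side: start nvmf_tgt in the test namespace (namespace created earlier in the trace),
# then configure transport, backing bdev, subsystem, namespace and listener, as in the trace.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
sleep 2    # crude stand-in for the harness's waitforlisten on /var/tmp/spdk.sock
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: bdevperf attaches to the target with the parameters printed in the trace
# (simplified config; the harness's generated JSON may carry additional bdev options).
cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf --json /tmp/bdevperf_nvme.json -q 128 -o 4096 -w verify -t 15 -f

The kill -9 of the target pid that follows in the trace is what triggers the long run of 'ABORTED - SQ DELETION' completions below: the target's submission queues go away and every in-flight read and write is failed back to bdevperf while the 15-second run continues.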
00:30:46.847 19:46:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3794909 00:30:46.847 19:46:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:30:46.847 [2024-05-15 19:46:12.811360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:52256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.847 [2024-05-15 19:46:12.811403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.847 [2024-05-15 19:46:12.811425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:52264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.847 [2024-05-15 19:46:12.811438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.847 [2024-05-15 19:46:12.811449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:52272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.847 [2024-05-15 19:46:12.811459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.847 [2024-05-15 19:46:12.811468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:52280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.847 [2024-05-15 19:46:12.811477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.847 [2024-05-15 19:46:12.811486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:52288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.847 [2024-05-15 19:46:12.811500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.847 [2024-05-15 19:46:12.811511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:52296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.847 [2024-05-15 19:46:12.811520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.847 [2024-05-15 19:46:12.811532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:52304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.847 [2024-05-15 19:46:12.811541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.847 [2024-05-15 19:46:12.811553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:52312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.848 [2024-05-15 19:46:12.811562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.848 [2024-05-15 19:46:12.811571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:52320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.848 [2024-05-15 19:46:12.811579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.848 [2024-05-15 19:46:12.811588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:52328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.848 [2024-05-15 19:46:12.811597] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.848 [2024-05-15 19:46:12.811606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:52336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.848 [2024-05-15 19:46:12.811613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.848 [2024-05-15 19:46:12.811624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:52344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.848 [2024-05-15 19:46:12.811631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.848 [2024-05-15 19:46:12.811642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:52352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.848 [2024-05-15 19:46:12.811650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.848 [2024-05-15 19:46:12.811661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:52360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.848 [2024-05-15 19:46:12.811670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.848 [2024-05-15 19:46:12.811681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:52368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.848 [2024-05-15 19:46:12.811689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.848 [2024-05-15 19:46:12.811701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:52376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.848 [2024-05-15 19:46:12.811709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.848 [2024-05-15 19:46:12.811720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:52384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.848 [2024-05-15 19:46:12.811730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.848 [2024-05-15 19:46:12.811744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:52392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.848 [2024-05-15 19:46:12.811756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.848 [2024-05-15 19:46:12.811768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:52400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.848 [2024-05-15 19:46:12.811776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.848 [2024-05-15 19:46:12.811789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:52408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.848 [2024-05-15 19:46:12.811797] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.848 [2024-05-15 19:46:12.811808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:52416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.848 [2024-05-15 19:46:12.811817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.848 [2024-05-15 19:46:12.811828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:52424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.848 [2024-05-15 19:46:12.811835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.848 [2024-05-15 19:46:12.811845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:52432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.848 [2024-05-15 19:46:12.811852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.848 [2024-05-15 19:46:12.811862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:52632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.848 [2024-05-15 19:46:12.811868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.848 [2024-05-15 19:46:12.811878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:52640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.848 [2024-05-15 19:46:12.811885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.848 [2024-05-15 19:46:12.811894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:52648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.848 [2024-05-15 19:46:12.811902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.848 [2024-05-15 19:46:12.811911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:52656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.848 [2024-05-15 19:46:12.811918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.848 [2024-05-15 19:46:12.811927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:52664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.848 [2024-05-15 19:46:12.811934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.848 [2024-05-15 19:46:12.811943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:52672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.848 [2024-05-15 19:46:12.811950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.848 [2024-05-15 19:46:12.811959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:52680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.848 [2024-05-15 19:46:12.811966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.848 [2024-05-15 19:46:12.811977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:52688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.848 [2024-05-15 19:46:12.811983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.848 [2024-05-15 19:46:12.811993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:52440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.848 [2024-05-15 19:46:12.812000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.848 [2024-05-15 19:46:12.812009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:52448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.848 [2024-05-15 19:46:12.812016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.848 [2024-05-15 19:46:12.812026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:52456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.848 [2024-05-15 19:46:12.812033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.848 [2024-05-15 19:46:12.812042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:52464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.849 [2024-05-15 19:46:12.812049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.849 [2024-05-15 19:46:12.812058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:52472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.849 [2024-05-15 19:46:12.812066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.849 [2024-05-15 19:46:12.812075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:52480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.849 [2024-05-15 19:46:12.812081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.849 [2024-05-15 19:46:12.812091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:52488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.849 [2024-05-15 19:46:12.812098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.849 [2024-05-15 19:46:12.812107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:52496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.849 [2024-05-15 19:46:12.812113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.849 [2024-05-15 19:46:12.812122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:52696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.849 [2024-05-15 19:46:12.812129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:30:46.849 [2024-05-15 19:46:12.812137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:52704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.849 [2024-05-15 19:46:12.812145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.849 [2024-05-15 19:46:12.812154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:52712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.849 [2024-05-15 19:46:12.812161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.849 [2024-05-15 19:46:12.812170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:52720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.849 [2024-05-15 19:46:12.812178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.849 [2024-05-15 19:46:12.812187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:52728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.849 [2024-05-15 19:46:12.812194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.849 [2024-05-15 19:46:12.812204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:52736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.849 [2024-05-15 19:46:12.812211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.849 [2024-05-15 19:46:12.812220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:52744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.849 [2024-05-15 19:46:12.812226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.849 [2024-05-15 19:46:12.812235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:52752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.849 [2024-05-15 19:46:12.812242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.849 [2024-05-15 19:46:12.812251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:52760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.849 [2024-05-15 19:46:12.812258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.849 [2024-05-15 19:46:12.812266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:52768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.849 [2024-05-15 19:46:12.812273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.849 [2024-05-15 19:46:12.812282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:52776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.849 [2024-05-15 19:46:12.812289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.849 [2024-05-15 
19:46:12.812298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:52784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.849 [2024-05-15 19:46:12.812305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.849 [2024-05-15 19:46:12.812319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:52792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.849 [2024-05-15 19:46:12.812326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.849 [2024-05-15 19:46:12.812335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:52800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.849 [2024-05-15 19:46:12.812342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.849 [2024-05-15 19:46:12.812351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:52808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.849 [2024-05-15 19:46:12.812357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.850 [2024-05-15 19:46:12.812366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:52816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.850 [2024-05-15 19:46:12.812374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.850 [2024-05-15 19:46:12.812385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:52504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.850 [2024-05-15 19:46:12.812392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.850 [2024-05-15 19:46:12.812401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:52512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.850 [2024-05-15 19:46:12.812408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.850 [2024-05-15 19:46:12.812417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:52520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.850 [2024-05-15 19:46:12.812423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.850 [2024-05-15 19:46:12.812432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:52528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.850 [2024-05-15 19:46:12.812439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.850 [2024-05-15 19:46:12.812448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:52536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.850 [2024-05-15 19:46:12.812455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.850 [2024-05-15 19:46:12.812464] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:52544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.850 [2024-05-15 19:46:12.812471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.850 [2024-05-15 19:46:12.812480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:52552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.850 [2024-05-15 19:46:12.812486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.850 [2024-05-15 19:46:12.812496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:52560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.850 [2024-05-15 19:46:12.812503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.850 [2024-05-15 19:46:12.812512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:52824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.850 [2024-05-15 19:46:12.812519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.850 [2024-05-15 19:46:12.812527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:52832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.850 [2024-05-15 19:46:12.812534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.850 [2024-05-15 19:46:12.812544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:52840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.850 [2024-05-15 19:46:12.812551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.850 [2024-05-15 19:46:12.812561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:52848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.850 [2024-05-15 19:46:12.812568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.850 [2024-05-15 19:46:12.812577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.850 [2024-05-15 19:46:12.812584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.850 [2024-05-15 19:46:12.812595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:52864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.850 [2024-05-15 19:46:12.812602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.850 [2024-05-15 19:46:12.812611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:52872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.850 [2024-05-15 19:46:12.812618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.850 [2024-05-15 19:46:12.812628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:54 nsid:1 lba:52880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.850 [2024-05-15 19:46:12.812635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.850 [2024-05-15 19:46:12.812644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:52568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.850 [2024-05-15 19:46:12.812651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.850 [2024-05-15 19:46:12.812660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:52576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.850 [2024-05-15 19:46:12.812667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.850 [2024-05-15 19:46:12.812676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:52584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.850 [2024-05-15 19:46:12.812683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.850 [2024-05-15 19:46:12.812692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:52592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.850 [2024-05-15 19:46:12.812699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.850 [2024-05-15 19:46:12.812708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:52600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.850 [2024-05-15 19:46:12.812715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.850 [2024-05-15 19:46:12.812725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:52608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.850 [2024-05-15 19:46:12.812732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.850 [2024-05-15 19:46:12.812741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:52616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.850 [2024-05-15 19:46:12.812748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.850 [2024-05-15 19:46:12.812757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:52888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.850 [2024-05-15 19:46:12.812764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.850 [2024-05-15 19:46:12.812773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:52896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.850 [2024-05-15 19:46:12.812779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.851 [2024-05-15 19:46:12.812789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:52904 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:30:46.851 [2024-05-15 19:46:12.812797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.851 [2024-05-15 19:46:12.812806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:52912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.851 [2024-05-15 19:46:12.812814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.851 [2024-05-15 19:46:12.812822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:52920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.851 [2024-05-15 19:46:12.812829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.851 [2024-05-15 19:46:12.812839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:52928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.851 [2024-05-15 19:46:12.812847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.851 [2024-05-15 19:46:12.812856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:52936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.851 [2024-05-15 19:46:12.812863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.851 [2024-05-15 19:46:12.812872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:52944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.851 [2024-05-15 19:46:12.812879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.851 [2024-05-15 19:46:12.812888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:52952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.851 [2024-05-15 19:46:12.812895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.851 [2024-05-15 19:46:12.812904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:52960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.851 [2024-05-15 19:46:12.812911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.851 [2024-05-15 19:46:12.812919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:52968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.851 [2024-05-15 19:46:12.812926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.851 [2024-05-15 19:46:12.812935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:52976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.851 [2024-05-15 19:46:12.812942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.852 [2024-05-15 19:46:12.812951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:52984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.852 [2024-05-15 
19:46:12.812958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.852 [2024-05-15 19:46:12.812967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:52992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.852 [2024-05-15 19:46:12.812974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.852 [2024-05-15 19:46:12.812982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:53000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.852 [2024-05-15 19:46:12.812989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.852 [2024-05-15 19:46:12.813000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:53008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.852 [2024-05-15 19:46:12.813007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.852 [2024-05-15 19:46:12.813016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:53016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.852 [2024-05-15 19:46:12.813023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.852 [2024-05-15 19:46:12.813032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:53024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.852 [2024-05-15 19:46:12.813039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.852 [2024-05-15 19:46:12.813048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:53032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.852 [2024-05-15 19:46:12.813055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.852 [2024-05-15 19:46:12.813064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:53040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.852 [2024-05-15 19:46:12.813071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.852 [2024-05-15 19:46:12.813080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:53048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.852 [2024-05-15 19:46:12.813087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.852 [2024-05-15 19:46:12.813096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:53056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.852 [2024-05-15 19:46:12.813104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.852 [2024-05-15 19:46:12.813112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:53064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.852 [2024-05-15 19:46:12.813119] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.852 [2024-05-15 19:46:12.813128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:53072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.852 [2024-05-15 19:46:12.813135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.852 [2024-05-15 19:46:12.813145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:53080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.852 [2024-05-15 19:46:12.813152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.852 [2024-05-15 19:46:12.813161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:53088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.852 [2024-05-15 19:46:12.813168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.852 [2024-05-15 19:46:12.813177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:53096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.852 [2024-05-15 19:46:12.813184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.852 [2024-05-15 19:46:12.813193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:53104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.852 [2024-05-15 19:46:12.813201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.852 [2024-05-15 19:46:12.813210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:53112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.852 [2024-05-15 19:46:12.813217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.852 [2024-05-15 19:46:12.813226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:53120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.852 [2024-05-15 19:46:12.813233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.852 [2024-05-15 19:46:12.813242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:53128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.852 [2024-05-15 19:46:12.813249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.852 [2024-05-15 19:46:12.813258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:53136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.852 [2024-05-15 19:46:12.813266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.852 [2024-05-15 19:46:12.813275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:53144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.852 [2024-05-15 19:46:12.813282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.852 [2024-05-15 19:46:12.813291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:53152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.852 [2024-05-15 19:46:12.813298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.852 [2024-05-15 19:46:12.813307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:53160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.852 [2024-05-15 19:46:12.813318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.852 [2024-05-15 19:46:12.813327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:53168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.852 [2024-05-15 19:46:12.813334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.852 [2024-05-15 19:46:12.813343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:53176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.852 [2024-05-15 19:46:12.813350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.853 [2024-05-15 19:46:12.813359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:53184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.853 [2024-05-15 19:46:12.813366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.853 [2024-05-15 19:46:12.813375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:53192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.853 [2024-05-15 19:46:12.813382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.853 [2024-05-15 19:46:12.813391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:53200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.853 [2024-05-15 19:46:12.813398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.853 [2024-05-15 19:46:12.813406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:53208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.853 [2024-05-15 19:46:12.813415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.853 [2024-05-15 19:46:12.813424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:53216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.853 [2024-05-15 19:46:12.813431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.853 [2024-05-15 19:46:12.813440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:53224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.853 [2024-05-15 19:46:12.813447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:30:46.853 [2024-05-15 19:46:12.813456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:53232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.853 [2024-05-15 19:46:12.813463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.853 [2024-05-15 19:46:12.813472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:53240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.853 [2024-05-15 19:46:12.813479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.853 [2024-05-15 19:46:12.813488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:53248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.853 [2024-05-15 19:46:12.813495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.853 [2024-05-15 19:46:12.813503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:53256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.853 [2024-05-15 19:46:12.813511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.853 [2024-05-15 19:46:12.813520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:53264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.853 [2024-05-15 19:46:12.813527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.853 [2024-05-15 19:46:12.813535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:53272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.853 [2024-05-15 19:46:12.813542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.853 [2024-05-15 19:46:12.813550] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c9550 is same with the state(5) to be set 00:30:46.853 [2024-05-15 19:46:12.813559] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:46.853 [2024-05-15 19:46:12.813566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:46.853 [2024-05-15 19:46:12.813572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52624 len:8 PRP1 0x0 PRP2 0x0 00:30:46.853 [2024-05-15 19:46:12.813579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.853 [2024-05-15 19:46:12.813616] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19c9550 was disconnected and freed. reset controller. 
00:30:46.853 [2024-05-15 19:46:12.813658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:46.853 [2024-05-15 19:46:12.813667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.853 [2024-05-15 19:46:12.813676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:46.853 [2024-05-15 19:46:12.813685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.853 [2024-05-15 19:46:12.813693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:46.853 [2024-05-15 19:46:12.813700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.853 [2024-05-15 19:46:12.813708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:46.853 [2024-05-15 19:46:12.813715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.853 [2024-05-15 19:46:12.813723] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:46.853 [2024-05-15 19:46:12.817273] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.853 [2024-05-15 19:46:12.817292] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:46.853 [2024-05-15 19:46:12.817999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.853 [2024-05-15 19:46:12.818561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.853 [2024-05-15 19:46:12.818597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:46.853 [2024-05-15 19:46:12.818608] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:46.853 [2024-05-15 19:46:12.818848] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:46.853 [2024-05-15 19:46:12.819070] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.853 [2024-05-15 19:46:12.819079] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.853 [2024-05-15 19:46:12.819087] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.853 [2024-05-15 19:46:12.822643] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.853 [2024-05-15 19:46:12.831420] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.853 [2024-05-15 19:46:12.831905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.853 [2024-05-15 19:46:12.832298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.853 [2024-05-15 19:46:12.832308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:46.853 [2024-05-15 19:46:12.832325] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:46.853 [2024-05-15 19:46:12.832557] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:46.853 [2024-05-15 19:46:12.832777] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.854 [2024-05-15 19:46:12.832784] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.854 [2024-05-15 19:46:12.832792] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.854 [2024-05-15 19:46:12.836336] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.854 [2024-05-15 19:46:12.845323] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.854 [2024-05-15 19:46:12.845917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.854 [2024-05-15 19:46:12.846324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.854 [2024-05-15 19:46:12.846342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:46.854 [2024-05-15 19:46:12.846350] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:46.854 [2024-05-15 19:46:12.846569] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:46.854 [2024-05-15 19:46:12.846787] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.854 [2024-05-15 19:46:12.846795] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.854 [2024-05-15 19:46:12.846802] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.854 [2024-05-15 19:46:12.850348] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.854 [2024-05-15 19:46:12.859118] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.854 [2024-05-15 19:46:12.859792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.854 [2024-05-15 19:46:12.860182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.854 [2024-05-15 19:46:12.860195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:46.854 [2024-05-15 19:46:12.860204] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:46.854 [2024-05-15 19:46:12.860451] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:46.854 [2024-05-15 19:46:12.860674] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.854 [2024-05-15 19:46:12.860682] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.854 [2024-05-15 19:46:12.860690] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.854 [2024-05-15 19:46:12.864243] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.854 [2024-05-15 19:46:12.873027] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.854 [2024-05-15 19:46:12.873650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.854 [2024-05-15 19:46:12.874027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.854 [2024-05-15 19:46:12.874037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:46.854 [2024-05-15 19:46:12.874045] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:46.854 [2024-05-15 19:46:12.874264] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:46.854 [2024-05-15 19:46:12.874491] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.854 [2024-05-15 19:46:12.874499] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.854 [2024-05-15 19:46:12.874506] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.854 [2024-05-15 19:46:12.878048] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.854 [2024-05-15 19:46:12.886843] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.854 [2024-05-15 19:46:12.887636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.854 [2024-05-15 19:46:12.887996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.854 [2024-05-15 19:46:12.888009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:46.854 [2024-05-15 19:46:12.888023] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:46.854 [2024-05-15 19:46:12.888264] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:46.854 [2024-05-15 19:46:12.888497] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.854 [2024-05-15 19:46:12.888506] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.854 [2024-05-15 19:46:12.888514] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.854 [2024-05-15 19:46:12.892061] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.854 [2024-05-15 19:46:12.900827] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.854 [2024-05-15 19:46:12.901544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.854 [2024-05-15 19:46:12.901901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.854 [2024-05-15 19:46:12.901914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:46.854 [2024-05-15 19:46:12.901924] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:46.854 [2024-05-15 19:46:12.902166] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:46.854 [2024-05-15 19:46:12.902394] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.854 [2024-05-15 19:46:12.902404] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.854 [2024-05-15 19:46:12.902412] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.854 [2024-05-15 19:46:12.905962] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.854 [2024-05-15 19:46:12.914734] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.854 [2024-05-15 19:46:12.915247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.854 [2024-05-15 19:46:12.915601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.854 [2024-05-15 19:46:12.915612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:46.854 [2024-05-15 19:46:12.915620] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:46.854 [2024-05-15 19:46:12.915839] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:46.854 [2024-05-15 19:46:12.916059] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.855 [2024-05-15 19:46:12.916068] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.855 [2024-05-15 19:46:12.916075] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.855 [2024-05-15 19:46:12.919620] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.855 [2024-05-15 19:46:12.928595] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.855 [2024-05-15 19:46:12.929213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.855 [2024-05-15 19:46:12.929583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.855 [2024-05-15 19:46:12.929595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:46.855 [2024-05-15 19:46:12.929602] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:46.855 [2024-05-15 19:46:12.929828] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:46.855 [2024-05-15 19:46:12.930047] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.855 [2024-05-15 19:46:12.930056] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.855 [2024-05-15 19:46:12.930063] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.855 [2024-05-15 19:46:12.933608] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.855 [2024-05-15 19:46:12.942584] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.855 [2024-05-15 19:46:12.943361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.855 [2024-05-15 19:46:12.943856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.855 [2024-05-15 19:46:12.943870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:46.855 [2024-05-15 19:46:12.943880] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:46.855 [2024-05-15 19:46:12.944125] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:46.855 [2024-05-15 19:46:12.944370] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.855 [2024-05-15 19:46:12.944380] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.856 [2024-05-15 19:46:12.944387] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.856 [2024-05-15 19:46:12.947944] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.856 [2024-05-15 19:46:12.956518] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.856 [2024-05-15 19:46:12.957268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.856 [2024-05-15 19:46:12.957695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.856 [2024-05-15 19:46:12.957710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:46.856 [2024-05-15 19:46:12.957720] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:46.856 [2024-05-15 19:46:12.957967] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:46.856 [2024-05-15 19:46:12.958191] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.856 [2024-05-15 19:46:12.958199] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.856 [2024-05-15 19:46:12.958207] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.856 [2024-05-15 19:46:12.961767] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.856 [2024-05-15 19:46:12.970341] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.856 [2024-05-15 19:46:12.971108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.856 [2024-05-15 19:46:12.971425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.856 [2024-05-15 19:46:12.971452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:46.856 [2024-05-15 19:46:12.971464] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:46.856 [2024-05-15 19:46:12.971712] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:46.856 [2024-05-15 19:46:12.971943] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.856 [2024-05-15 19:46:12.971951] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.856 [2024-05-15 19:46:12.971959] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.856 [2024-05-15 19:46:12.975529] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.856 [2024-05-15 19:46:12.984328] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.856 [2024-05-15 19:46:12.984977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.856 [2024-05-15 19:46:12.985299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.856 [2024-05-15 19:46:12.985309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:46.856 [2024-05-15 19:46:12.985326] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:46.856 [2024-05-15 19:46:12.985549] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:46.856 [2024-05-15 19:46:12.985769] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.856 [2024-05-15 19:46:12.985778] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.856 [2024-05-15 19:46:12.985786] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.856 [2024-05-15 19:46:12.989346] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.856 [2024-05-15 19:46:12.998132] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.856 [2024-05-15 19:46:12.998866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.856 [2024-05-15 19:46:12.999330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.856 [2024-05-15 19:46:12.999346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:46.856 [2024-05-15 19:46:12.999357] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:46.856 [2024-05-15 19:46:12.999608] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:46.856 [2024-05-15 19:46:12.999833] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.856 [2024-05-15 19:46:12.999842] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.856 [2024-05-15 19:46:12.999849] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.856 [2024-05-15 19:46:13.003420] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.856 [2024-05-15 19:46:13.012007] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.856 [2024-05-15 19:46:13.012755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.856 [2024-05-15 19:46:13.013265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.856 [2024-05-15 19:46:13.013280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:46.856 [2024-05-15 19:46:13.013292] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:46.856 [2024-05-15 19:46:13.013559] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:46.856 [2024-05-15 19:46:13.013786] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.856 [2024-05-15 19:46:13.013803] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.856 [2024-05-15 19:46:13.013811] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.856 [2024-05-15 19:46:13.017377] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.123 [2024-05-15 19:46:13.025964] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.123 [2024-05-15 19:46:13.026602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.123 [2024-05-15 19:46:13.027121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.123 [2024-05-15 19:46:13.027136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.123 [2024-05-15 19:46:13.027147] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.123 [2024-05-15 19:46:13.027412] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.123 [2024-05-15 19:46:13.027639] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.123 [2024-05-15 19:46:13.027647] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.123 [2024-05-15 19:46:13.027655] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.123 [2024-05-15 19:46:13.031221] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.123 [2024-05-15 19:46:13.039802] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.123 [2024-05-15 19:46:13.040476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.123 [2024-05-15 19:46:13.040906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.123 [2024-05-15 19:46:13.040919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.123 [2024-05-15 19:46:13.040929] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.123 [2024-05-15 19:46:13.041170] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.123 [2024-05-15 19:46:13.041401] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.123 [2024-05-15 19:46:13.041410] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.123 [2024-05-15 19:46:13.041418] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.123 [2024-05-15 19:46:13.045001] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.123 [2024-05-15 19:46:13.053788] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.123 [2024-05-15 19:46:13.054440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.123 [2024-05-15 19:46:13.054933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.123 [2024-05-15 19:46:13.054948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.123 [2024-05-15 19:46:13.054959] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.123 [2024-05-15 19:46:13.055212] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.123 [2024-05-15 19:46:13.055450] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.123 [2024-05-15 19:46:13.055460] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.123 [2024-05-15 19:46:13.055476] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.123 [2024-05-15 19:46:13.059045] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.123 [2024-05-15 19:46:13.067631] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.123 [2024-05-15 19:46:13.068365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.123 [2024-05-15 19:46:13.068901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.123 [2024-05-15 19:46:13.068917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.123 [2024-05-15 19:46:13.068928] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.123 [2024-05-15 19:46:13.069183] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.123 [2024-05-15 19:46:13.069422] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.123 [2024-05-15 19:46:13.069433] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.123 [2024-05-15 19:46:13.069442] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.123 [2024-05-15 19:46:13.073014] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.123 [2024-05-15 19:46:13.081609] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.123 [2024-05-15 19:46:13.082377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.123 [2024-05-15 19:46:13.082847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.123 [2024-05-15 19:46:13.082862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.123 [2024-05-15 19:46:13.082874] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.123 [2024-05-15 19:46:13.083128] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.123 [2024-05-15 19:46:13.083367] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.123 [2024-05-15 19:46:13.083379] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.123 [2024-05-15 19:46:13.083388] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.123 [2024-05-15 19:46:13.086957] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.123 [2024-05-15 19:46:13.095547] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.123 [2024-05-15 19:46:13.096162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.123 [2024-05-15 19:46:13.096693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.123 [2024-05-15 19:46:13.096756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.123 [2024-05-15 19:46:13.096769] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.123 [2024-05-15 19:46:13.097024] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.123 [2024-05-15 19:46:13.097250] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.123 [2024-05-15 19:46:13.097259] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.123 [2024-05-15 19:46:13.097267] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.123 [2024-05-15 19:46:13.100853] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.123 [2024-05-15 19:46:13.109441] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.123 [2024-05-15 19:46:13.110153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.123 [2024-05-15 19:46:13.110652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.123 [2024-05-15 19:46:13.110713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.123 [2024-05-15 19:46:13.110725] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.123 [2024-05-15 19:46:13.110980] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.123 [2024-05-15 19:46:13.111205] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.123 [2024-05-15 19:46:13.111215] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.123 [2024-05-15 19:46:13.111223] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.123 [2024-05-15 19:46:13.114797] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.123 [2024-05-15 19:46:13.123375] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.123 [2024-05-15 19:46:13.124067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.123 [2024-05-15 19:46:13.124602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.123 [2024-05-15 19:46:13.124663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.123 [2024-05-15 19:46:13.124676] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.123 [2024-05-15 19:46:13.124930] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.123 [2024-05-15 19:46:13.125156] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.123 [2024-05-15 19:46:13.125165] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.123 [2024-05-15 19:46:13.125173] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.123 [2024-05-15 19:46:13.128747] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.123 [2024-05-15 19:46:13.137321] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.123 [2024-05-15 19:46:13.137971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.124 [2024-05-15 19:46:13.138569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.124 [2024-05-15 19:46:13.138630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.124 [2024-05-15 19:46:13.138643] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.124 [2024-05-15 19:46:13.138897] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.124 [2024-05-15 19:46:13.139124] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.124 [2024-05-15 19:46:13.139132] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.124 [2024-05-15 19:46:13.139140] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.124 [2024-05-15 19:46:13.142721] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.124 [2024-05-15 19:46:13.151120] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.124 [2024-05-15 19:46:13.151833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.124 [2024-05-15 19:46:13.152274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.124 [2024-05-15 19:46:13.152288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.124 [2024-05-15 19:46:13.152299] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.124 [2024-05-15 19:46:13.152564] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.124 [2024-05-15 19:46:13.152791] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.124 [2024-05-15 19:46:13.152799] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.124 [2024-05-15 19:46:13.152807] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.124 [2024-05-15 19:46:13.156373] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.124 [2024-05-15 19:46:13.164947] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.124 [2024-05-15 19:46:13.165697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.124 [2024-05-15 19:46:13.166162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.124 [2024-05-15 19:46:13.166176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.124 [2024-05-15 19:46:13.166187] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.124 [2024-05-15 19:46:13.166454] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.124 [2024-05-15 19:46:13.166681] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.124 [2024-05-15 19:46:13.166689] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.124 [2024-05-15 19:46:13.166697] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.124 [2024-05-15 19:46:13.170254] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.124 [2024-05-15 19:46:13.178823] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.124 [2024-05-15 19:46:13.179601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.124 [2024-05-15 19:46:13.180051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.124 [2024-05-15 19:46:13.180066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.124 [2024-05-15 19:46:13.180077] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.124 [2024-05-15 19:46:13.180340] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.124 [2024-05-15 19:46:13.180568] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.124 [2024-05-15 19:46:13.180576] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.124 [2024-05-15 19:46:13.180584] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.124 [2024-05-15 19:46:13.184158] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.124 [2024-05-15 19:46:13.192733] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.124 [2024-05-15 19:46:13.193460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.124 [2024-05-15 19:46:13.193826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.124 [2024-05-15 19:46:13.193842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.124 [2024-05-15 19:46:13.193853] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.124 [2024-05-15 19:46:13.194107] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.124 [2024-05-15 19:46:13.194346] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.124 [2024-05-15 19:46:13.194356] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.124 [2024-05-15 19:46:13.194364] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.124 [2024-05-15 19:46:13.197929] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.124 [2024-05-15 19:46:13.206712] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.124 [2024-05-15 19:46:13.207414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.124 [2024-05-15 19:46:13.207857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.124 [2024-05-15 19:46:13.207871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.124 [2024-05-15 19:46:13.207882] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.124 [2024-05-15 19:46:13.208136] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.124 [2024-05-15 19:46:13.208376] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.124 [2024-05-15 19:46:13.208386] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.124 [2024-05-15 19:46:13.208394] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.124 [2024-05-15 19:46:13.211958] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.124 [2024-05-15 19:46:13.220527] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.124 [2024-05-15 19:46:13.221177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.124 [2024-05-15 19:46:13.221615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.124 [2024-05-15 19:46:13.221632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.124 [2024-05-15 19:46:13.221643] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.124 [2024-05-15 19:46:13.221897] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.124 [2024-05-15 19:46:13.222122] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.124 [2024-05-15 19:46:13.222131] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.124 [2024-05-15 19:46:13.222139] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.124 [2024-05-15 19:46:13.225707] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.124 [2024-05-15 19:46:13.234492] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.124 [2024-05-15 19:46:13.235238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.124 [2024-05-15 19:46:13.235713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.124 [2024-05-15 19:46:13.235729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.124 [2024-05-15 19:46:13.235740] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.124 [2024-05-15 19:46:13.235993] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.124 [2024-05-15 19:46:13.236219] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.124 [2024-05-15 19:46:13.236227] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.124 [2024-05-15 19:46:13.236235] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.124 [2024-05-15 19:46:13.239801] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.124 [2024-05-15 19:46:13.248393] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.124 [2024-05-15 19:46:13.249111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.124 [2024-05-15 19:46:13.249620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.124 [2024-05-15 19:46:13.249637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.124 [2024-05-15 19:46:13.249648] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.124 [2024-05-15 19:46:13.249903] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.124 [2024-05-15 19:46:13.250128] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.124 [2024-05-15 19:46:13.250136] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.124 [2024-05-15 19:46:13.250144] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.124 [2024-05-15 19:46:13.253716] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.124 [2024-05-15 19:46:13.262283] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.124 [2024-05-15 19:46:13.263006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.124 [2024-05-15 19:46:13.263469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.124 [2024-05-15 19:46:13.263486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.125 [2024-05-15 19:46:13.263497] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.125 [2024-05-15 19:46:13.263750] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.125 [2024-05-15 19:46:13.263975] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.125 [2024-05-15 19:46:13.263983] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.125 [2024-05-15 19:46:13.263991] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.125 [2024-05-15 19:46:13.267556] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.125 [2024-05-15 19:46:13.276127] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.125 [2024-05-15 19:46:13.276911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.125 [2024-05-15 19:46:13.277386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.125 [2024-05-15 19:46:13.277403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.125 [2024-05-15 19:46:13.277420] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.125 [2024-05-15 19:46:13.277674] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.125 [2024-05-15 19:46:13.277900] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.125 [2024-05-15 19:46:13.277908] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.125 [2024-05-15 19:46:13.277916] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.125 [2024-05-15 19:46:13.281483] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.125 [2024-05-15 19:46:13.290055] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.125 [2024-05-15 19:46:13.290801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.125 [2024-05-15 19:46:13.291261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.125 [2024-05-15 19:46:13.291275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.125 [2024-05-15 19:46:13.291286] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.125 [2024-05-15 19:46:13.291552] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.125 [2024-05-15 19:46:13.291779] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.125 [2024-05-15 19:46:13.291787] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.125 [2024-05-15 19:46:13.291795] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.125 [2024-05-15 19:46:13.295361] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.125 [2024-05-15 19:46:13.303938] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.125 [2024-05-15 19:46:13.304708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.125 [2024-05-15 19:46:13.305208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.125 [2024-05-15 19:46:13.305223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.125 [2024-05-15 19:46:13.305234] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.389 [2024-05-15 19:46:13.305506] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.389 [2024-05-15 19:46:13.305738] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.389 [2024-05-15 19:46:13.305750] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.389 [2024-05-15 19:46:13.305758] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.389 [2024-05-15 19:46:13.309327] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.389 [2024-05-15 19:46:13.317911] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.389 [2024-05-15 19:46:13.318657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.389 [2024-05-15 19:46:13.319118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.389 [2024-05-15 19:46:13.319133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.389 [2024-05-15 19:46:13.319144] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.389 [2024-05-15 19:46:13.319417] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.389 [2024-05-15 19:46:13.319644] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.389 [2024-05-15 19:46:13.319653] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.389 [2024-05-15 19:46:13.319661] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.389 [2024-05-15 19:46:13.323229] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.389 [2024-05-15 19:46:13.331809] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.389 [2024-05-15 19:46:13.332550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.389 [2024-05-15 19:46:13.333059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.389 [2024-05-15 19:46:13.333073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.389 [2024-05-15 19:46:13.333084] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.389 [2024-05-15 19:46:13.333346] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.389 [2024-05-15 19:46:13.333572] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.389 [2024-05-15 19:46:13.333581] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.389 [2024-05-15 19:46:13.333589] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.389 [2024-05-15 19:46:13.337159] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.389 [2024-05-15 19:46:13.345750] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.389 [2024-05-15 19:46:13.346558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.389 [2024-05-15 19:46:13.347132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.389 [2024-05-15 19:46:13.347146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.389 [2024-05-15 19:46:13.347158] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.389 [2024-05-15 19:46:13.347423] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.389 [2024-05-15 19:46:13.347650] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.389 [2024-05-15 19:46:13.347658] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.389 [2024-05-15 19:46:13.347667] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.389 [2024-05-15 19:46:13.351229] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.389 [2024-05-15 19:46:13.359598] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.389 [2024-05-15 19:46:13.360360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.389 [2024-05-15 19:46:13.360846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.389 [2024-05-15 19:46:13.360862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.389 [2024-05-15 19:46:13.360874] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.389 [2024-05-15 19:46:13.361128] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.389 [2024-05-15 19:46:13.361373] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.389 [2024-05-15 19:46:13.361383] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.389 [2024-05-15 19:46:13.361391] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.389 [2024-05-15 19:46:13.364960] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.389 [2024-05-15 19:46:13.373538] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.389 [2024-05-15 19:46:13.374240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.389 [2024-05-15 19:46:13.374723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.389 [2024-05-15 19:46:13.374739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.389 [2024-05-15 19:46:13.374750] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.389 [2024-05-15 19:46:13.375004] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.389 [2024-05-15 19:46:13.375230] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.389 [2024-05-15 19:46:13.375239] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.389 [2024-05-15 19:46:13.375247] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.389 [2024-05-15 19:46:13.378813] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.389 [2024-05-15 19:46:13.387380] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.389 [2024-05-15 19:46:13.388103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.389 [2024-05-15 19:46:13.388567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.389 [2024-05-15 19:46:13.388584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.389 [2024-05-15 19:46:13.388596] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.389 [2024-05-15 19:46:13.388849] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.389 [2024-05-15 19:46:13.389074] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.389 [2024-05-15 19:46:13.389083] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.389 [2024-05-15 19:46:13.389090] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.389 [2024-05-15 19:46:13.392660] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.389 [2024-05-15 19:46:13.401233] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.389 [2024-05-15 19:46:13.402011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.389 [2024-05-15 19:46:13.402462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.389 [2024-05-15 19:46:13.402479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.389 [2024-05-15 19:46:13.402490] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.389 [2024-05-15 19:46:13.402744] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.389 [2024-05-15 19:46:13.402969] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.389 [2024-05-15 19:46:13.402985] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.389 [2024-05-15 19:46:13.402993] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.389 [2024-05-15 19:46:13.406558] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.389 [2024-05-15 19:46:13.415137] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.389 [2024-05-15 19:46:13.415792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.389 [2024-05-15 19:46:13.416299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.389 [2024-05-15 19:46:13.416325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.389 [2024-05-15 19:46:13.416337] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.389 [2024-05-15 19:46:13.416590] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.389 [2024-05-15 19:46:13.416815] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.389 [2024-05-15 19:46:13.416824] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.389 [2024-05-15 19:46:13.416832] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.389 [2024-05-15 19:46:13.420399] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.389 [2024-05-15 19:46:13.429013] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.389 [2024-05-15 19:46:13.429664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.389 [2024-05-15 19:46:13.430114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.389 [2024-05-15 19:46:13.430129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.389 [2024-05-15 19:46:13.430140] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.389 [2024-05-15 19:46:13.430405] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.389 [2024-05-15 19:46:13.430633] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.389 [2024-05-15 19:46:13.430643] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.389 [2024-05-15 19:46:13.430651] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.389 [2024-05-15 19:46:13.434219] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.389 [2024-05-15 19:46:13.442996] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.389 [2024-05-15 19:46:13.443736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.389 [2024-05-15 19:46:13.444191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.389 [2024-05-15 19:46:13.444206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.389 [2024-05-15 19:46:13.444217] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.389 [2024-05-15 19:46:13.444484] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.389 [2024-05-15 19:46:13.444711] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.389 [2024-05-15 19:46:13.444720] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.389 [2024-05-15 19:46:13.444737] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.389 [2024-05-15 19:46:13.448327] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.389 [2024-05-15 19:46:13.456901] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.389 [2024-05-15 19:46:13.457630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.389 [2024-05-15 19:46:13.458151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.389 [2024-05-15 19:46:13.458166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.389 [2024-05-15 19:46:13.458177] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.389 [2024-05-15 19:46:13.458445] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.389 [2024-05-15 19:46:13.458672] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.389 [2024-05-15 19:46:13.458681] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.389 [2024-05-15 19:46:13.458689] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.389 [2024-05-15 19:46:13.462255] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.389 [2024-05-15 19:46:13.470828] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.389 [2024-05-15 19:46:13.471558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.389 [2024-05-15 19:46:13.471927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.389 [2024-05-15 19:46:13.471941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.389 [2024-05-15 19:46:13.471952] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.389 [2024-05-15 19:46:13.472206] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.389 [2024-05-15 19:46:13.472441] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.389 [2024-05-15 19:46:13.472452] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.389 [2024-05-15 19:46:13.472460] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.389 [2024-05-15 19:46:13.476023] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.389 [2024-05-15 19:46:13.484803] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.389 [2024-05-15 19:46:13.485621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.389 [2024-05-15 19:46:13.486061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.389 [2024-05-15 19:46:13.486076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.389 [2024-05-15 19:46:13.486086] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.389 [2024-05-15 19:46:13.486354] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.389 [2024-05-15 19:46:13.486581] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.389 [2024-05-15 19:46:13.486590] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.389 [2024-05-15 19:46:13.486598] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.389 [2024-05-15 19:46:13.490187] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.389 [2024-05-15 19:46:13.498678] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.389 [2024-05-15 19:46:13.499424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.389 [2024-05-15 19:46:13.499868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.390 [2024-05-15 19:46:13.499882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.390 [2024-05-15 19:46:13.499893] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.390 [2024-05-15 19:46:13.500147] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.390 [2024-05-15 19:46:13.500385] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.390 [2024-05-15 19:46:13.500395] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.390 [2024-05-15 19:46:13.500403] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.390 [2024-05-15 19:46:13.503968] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.390 [2024-05-15 19:46:13.512541] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.390 [2024-05-15 19:46:13.513272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.390 [2024-05-15 19:46:13.513821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.390 [2024-05-15 19:46:13.513837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.390 [2024-05-15 19:46:13.513848] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.390 [2024-05-15 19:46:13.514102] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.390 [2024-05-15 19:46:13.514338] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.390 [2024-05-15 19:46:13.514347] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.390 [2024-05-15 19:46:13.514355] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.390 [2024-05-15 19:46:13.517919] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.390 [2024-05-15 19:46:13.526487] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.390 [2024-05-15 19:46:13.527260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.390 [2024-05-15 19:46:13.527724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.390 [2024-05-15 19:46:13.527740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.390 [2024-05-15 19:46:13.527751] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.390 [2024-05-15 19:46:13.528004] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.390 [2024-05-15 19:46:13.528229] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.390 [2024-05-15 19:46:13.528238] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.390 [2024-05-15 19:46:13.528246] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.390 [2024-05-15 19:46:13.531819] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.390 [2024-05-15 19:46:13.540399] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.390 [2024-05-15 19:46:13.541124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.390 [2024-05-15 19:46:13.541540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.390 [2024-05-15 19:46:13.541558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.390 [2024-05-15 19:46:13.541569] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.390 [2024-05-15 19:46:13.541823] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.390 [2024-05-15 19:46:13.542049] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.390 [2024-05-15 19:46:13.542059] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.390 [2024-05-15 19:46:13.542067] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.390 [2024-05-15 19:46:13.545635] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.390 [2024-05-15 19:46:13.554227] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.390 [2024-05-15 19:46:13.554972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.390 [2024-05-15 19:46:13.555430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.390 [2024-05-15 19:46:13.555447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.390 [2024-05-15 19:46:13.555458] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.390 [2024-05-15 19:46:13.555712] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.390 [2024-05-15 19:46:13.555939] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.390 [2024-05-15 19:46:13.555947] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.390 [2024-05-15 19:46:13.555955] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.390 [2024-05-15 19:46:13.559525] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.390 [2024-05-15 19:46:13.568099] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.390 [2024-05-15 19:46:13.568850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.390 [2024-05-15 19:46:13.570270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.390 [2024-05-15 19:46:13.570290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.390 [2024-05-15 19:46:13.570302] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.390 [2024-05-15 19:46:13.570566] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.390 [2024-05-15 19:46:13.570794] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.390 [2024-05-15 19:46:13.570803] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.390 [2024-05-15 19:46:13.570811] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.650 [2024-05-15 19:46:13.574394] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.650 [2024-05-15 19:46:13.581927] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.650 [2024-05-15 19:46:13.582664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.650 [2024-05-15 19:46:13.583118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.650 [2024-05-15 19:46:13.583133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.650 [2024-05-15 19:46:13.583144] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.650 [2024-05-15 19:46:13.583407] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.650 [2024-05-15 19:46:13.583634] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.650 [2024-05-15 19:46:13.583643] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.650 [2024-05-15 19:46:13.583651] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.650 [2024-05-15 19:46:13.587212] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.650 [2024-05-15 19:46:13.595788] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.650 [2024-05-15 19:46:13.596571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.650 [2024-05-15 19:46:13.597029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.650 [2024-05-15 19:46:13.597045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.650 [2024-05-15 19:46:13.597057] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.650 [2024-05-15 19:46:13.597311] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.651 [2024-05-15 19:46:13.597554] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.651 [2024-05-15 19:46:13.597562] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.651 [2024-05-15 19:46:13.597570] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.651 [2024-05-15 19:46:13.601134] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.651 [2024-05-15 19:46:13.609993] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.651 [2024-05-15 19:46:13.610742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.651 [2024-05-15 19:46:13.611204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.651 [2024-05-15 19:46:13.611219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.651 [2024-05-15 19:46:13.611230] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.651 [2024-05-15 19:46:13.611498] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.651 [2024-05-15 19:46:13.611725] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.651 [2024-05-15 19:46:13.611733] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.651 [2024-05-15 19:46:13.611741] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.651 [2024-05-15 19:46:13.615302] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.651 [2024-05-15 19:46:13.623870] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.651 [2024-05-15 19:46:13.624606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.651 [2024-05-15 19:46:13.625080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.651 [2024-05-15 19:46:13.625095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.651 [2024-05-15 19:46:13.625105] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.651 [2024-05-15 19:46:13.625373] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.651 [2024-05-15 19:46:13.625600] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.651 [2024-05-15 19:46:13.625608] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.651 [2024-05-15 19:46:13.625616] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.651 [2024-05-15 19:46:13.629176] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.651 [2024-05-15 19:46:13.637745] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.651 [2024-05-15 19:46:13.638460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.651 [2024-05-15 19:46:13.638905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.651 [2024-05-15 19:46:13.638919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.651 [2024-05-15 19:46:13.638930] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.651 [2024-05-15 19:46:13.639180] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.651 [2024-05-15 19:46:13.639421] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.651 [2024-05-15 19:46:13.639431] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.651 [2024-05-15 19:46:13.639439] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.651 [2024-05-15 19:46:13.642999] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.651 [2024-05-15 19:46:13.651587] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.651 [2024-05-15 19:46:13.652353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.651 [2024-05-15 19:46:13.652835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.651 [2024-05-15 19:46:13.652850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.651 [2024-05-15 19:46:13.652861] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.651 [2024-05-15 19:46:13.653115] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.651 [2024-05-15 19:46:13.653354] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.651 [2024-05-15 19:46:13.653364] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.651 [2024-05-15 19:46:13.653372] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.651 [2024-05-15 19:46:13.656932] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.651 [2024-05-15 19:46:13.665522] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.651 [2024-05-15 19:46:13.666260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.651 [2024-05-15 19:46:13.666687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.651 [2024-05-15 19:46:13.666703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.651 [2024-05-15 19:46:13.666721] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.651 [2024-05-15 19:46:13.666976] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.651 [2024-05-15 19:46:13.667202] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.651 [2024-05-15 19:46:13.667212] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.651 [2024-05-15 19:46:13.667219] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.651 [2024-05-15 19:46:13.670799] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.651 [2024-05-15 19:46:13.679371] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.651 [2024-05-15 19:46:13.680096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.651 [2024-05-15 19:46:13.680555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.651 [2024-05-15 19:46:13.680572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.651 [2024-05-15 19:46:13.680583] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.651 [2024-05-15 19:46:13.680838] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.651 [2024-05-15 19:46:13.681064] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.651 [2024-05-15 19:46:13.681072] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.651 [2024-05-15 19:46:13.681080] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.651 [2024-05-15 19:46:13.684649] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.651 [2024-05-15 19:46:13.693212] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.651 [2024-05-15 19:46:13.693924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.651 [2024-05-15 19:46:13.694380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.651 [2024-05-15 19:46:13.694396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.651 [2024-05-15 19:46:13.694407] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.651 [2024-05-15 19:46:13.694662] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.651 [2024-05-15 19:46:13.694887] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.651 [2024-05-15 19:46:13.694896] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.651 [2024-05-15 19:46:13.694904] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.651 [2024-05-15 19:46:13.698471] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.651 [2024-05-15 19:46:13.707045] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.651 [2024-05-15 19:46:13.707787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.651 [2024-05-15 19:46:13.708261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.651 [2024-05-15 19:46:13.708275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.651 [2024-05-15 19:46:13.708286] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.651 [2024-05-15 19:46:13.708559] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.651 [2024-05-15 19:46:13.708785] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.651 [2024-05-15 19:46:13.708793] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.651 [2024-05-15 19:46:13.708801] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.651 [2024-05-15 19:46:13.712363] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.651 [2024-05-15 19:46:13.720932] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.651 [2024-05-15 19:46:13.721671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.651 [2024-05-15 19:46:13.722125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.651 [2024-05-15 19:46:13.722140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.651 [2024-05-15 19:46:13.722151] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.651 [2024-05-15 19:46:13.722416] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.651 [2024-05-15 19:46:13.722643] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.651 [2024-05-15 19:46:13.722651] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.651 [2024-05-15 19:46:13.722659] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.651 [2024-05-15 19:46:13.726218] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.651 [2024-05-15 19:46:13.734791] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.651 [2024-05-15 19:46:13.735450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.651 [2024-05-15 19:46:13.735896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.651 [2024-05-15 19:46:13.735910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.651 [2024-05-15 19:46:13.735921] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.651 [2024-05-15 19:46:13.736175] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.651 [2024-05-15 19:46:13.736414] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.651 [2024-05-15 19:46:13.736423] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.651 [2024-05-15 19:46:13.736432] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.651 [2024-05-15 19:46:13.739995] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.651 [2024-05-15 19:46:13.748791] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.651 [2024-05-15 19:46:13.749452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.651 [2024-05-15 19:46:13.749902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.651 [2024-05-15 19:46:13.749916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.651 [2024-05-15 19:46:13.749927] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.651 [2024-05-15 19:46:13.750181] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.651 [2024-05-15 19:46:13.750427] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.651 [2024-05-15 19:46:13.750437] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.651 [2024-05-15 19:46:13.750445] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.652 [2024-05-15 19:46:13.754008] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.652 [2024-05-15 19:46:13.762790] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.652 [2024-05-15 19:46:13.763447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.652 [2024-05-15 19:46:13.763903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.652 [2024-05-15 19:46:13.763917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.652 [2024-05-15 19:46:13.763929] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.652 [2024-05-15 19:46:13.764182] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.652 [2024-05-15 19:46:13.764422] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.652 [2024-05-15 19:46:13.764431] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.652 [2024-05-15 19:46:13.764439] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.652 [2024-05-15 19:46:13.768006] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.652 [2024-05-15 19:46:13.776782] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.652 [2024-05-15 19:46:13.777433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.652 [2024-05-15 19:46:13.777950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.652 [2024-05-15 19:46:13.777965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.652 [2024-05-15 19:46:13.777976] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.652 [2024-05-15 19:46:13.778231] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.652 [2024-05-15 19:46:13.778468] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.652 [2024-05-15 19:46:13.778479] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.652 [2024-05-15 19:46:13.778487] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.652 [2024-05-15 19:46:13.782049] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.652 [2024-05-15 19:46:13.790620] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.652 [2024-05-15 19:46:13.791401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.652 [2024-05-15 19:46:13.791835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.652 [2024-05-15 19:46:13.791849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.652 [2024-05-15 19:46:13.791860] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.652 [2024-05-15 19:46:13.792114] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.652 [2024-05-15 19:46:13.792353] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.652 [2024-05-15 19:46:13.792369] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.652 [2024-05-15 19:46:13.792377] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.652 [2024-05-15 19:46:13.795940] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.652 [2024-05-15 19:46:13.804502] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.652 [2024-05-15 19:46:13.805199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.652 [2024-05-15 19:46:13.805482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.652 [2024-05-15 19:46:13.805501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.652 [2024-05-15 19:46:13.805514] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.652 [2024-05-15 19:46:13.805768] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.652 [2024-05-15 19:46:13.805994] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.652 [2024-05-15 19:46:13.806002] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.652 [2024-05-15 19:46:13.806010] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.652 [2024-05-15 19:46:13.809578] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.652 [2024-05-15 19:46:13.818362] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.652 [2024-05-15 19:46:13.819105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.652 [2024-05-15 19:46:13.819575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.652 [2024-05-15 19:46:13.819593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.652 [2024-05-15 19:46:13.819604] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.652 [2024-05-15 19:46:13.819858] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.652 [2024-05-15 19:46:13.820084] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.652 [2024-05-15 19:46:13.820094] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.652 [2024-05-15 19:46:13.820102] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.652 [2024-05-15 19:46:13.823676] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.652 [2024-05-15 19:46:13.832254] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.652 [2024-05-15 19:46:13.832807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.652 [2024-05-15 19:46:13.833221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.652 [2024-05-15 19:46:13.833231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.652 [2024-05-15 19:46:13.833239] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.652 [2024-05-15 19:46:13.833467] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.652 [2024-05-15 19:46:13.833688] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.652 [2024-05-15 19:46:13.833696] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.652 [2024-05-15 19:46:13.833712] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.913 [2024-05-15 19:46:13.837265] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.913 [2024-05-15 19:46:13.846241] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.913 [2024-05-15 19:46:13.846868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.913 [2024-05-15 19:46:13.847257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.913 [2024-05-15 19:46:13.847267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.913 [2024-05-15 19:46:13.847275] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.913 [2024-05-15 19:46:13.847517] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.913 [2024-05-15 19:46:13.847738] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.913 [2024-05-15 19:46:13.847748] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.913 [2024-05-15 19:46:13.847755] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.913 [2024-05-15 19:46:13.851298] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.913 [2024-05-15 19:46:13.860066] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.913 [2024-05-15 19:46:13.860782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.913 [2024-05-15 19:46:13.861244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.913 [2024-05-15 19:46:13.861258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.913 [2024-05-15 19:46:13.861270] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.913 [2024-05-15 19:46:13.861537] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.913 [2024-05-15 19:46:13.861763] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.913 [2024-05-15 19:46:13.861772] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.913 [2024-05-15 19:46:13.861780] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.913 [2024-05-15 19:46:13.865349] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.913 [2024-05-15 19:46:13.873931] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.913 [2024-05-15 19:46:13.874664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.913 [2024-05-15 19:46:13.875118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.914 [2024-05-15 19:46:13.875133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.914 [2024-05-15 19:46:13.875144] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.914 [2024-05-15 19:46:13.875410] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.914 [2024-05-15 19:46:13.875636] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.914 [2024-05-15 19:46:13.875647] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.914 [2024-05-15 19:46:13.875655] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.914 [2024-05-15 19:46:13.879365] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.914 [2024-05-15 19:46:13.887737] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.914 [2024-05-15 19:46:13.888560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.914 [2024-05-15 19:46:13.889023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.914 [2024-05-15 19:46:13.889038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.914 [2024-05-15 19:46:13.889049] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.914 [2024-05-15 19:46:13.889302] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.914 [2024-05-15 19:46:13.889542] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.914 [2024-05-15 19:46:13.889553] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.914 [2024-05-15 19:46:13.889561] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.914 [2024-05-15 19:46:13.893125] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.914 [2024-05-15 19:46:13.901701] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.914 [2024-05-15 19:46:13.902429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.914 [2024-05-15 19:46:13.902899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.914 [2024-05-15 19:46:13.902913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.914 [2024-05-15 19:46:13.902925] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.914 [2024-05-15 19:46:13.903178] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.914 [2024-05-15 19:46:13.903418] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.914 [2024-05-15 19:46:13.903427] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.914 [2024-05-15 19:46:13.903435] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.914 [2024-05-15 19:46:13.907003] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.914 [2024-05-15 19:46:13.915576] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.914 [2024-05-15 19:46:13.916295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.914 [2024-05-15 19:46:13.916744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.914 [2024-05-15 19:46:13.916759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.914 [2024-05-15 19:46:13.916769] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.914 [2024-05-15 19:46:13.917021] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.914 [2024-05-15 19:46:13.917246] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.914 [2024-05-15 19:46:13.917254] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.914 [2024-05-15 19:46:13.917262] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.914 [2024-05-15 19:46:13.920829] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.914 [2024-05-15 19:46:13.929406] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.914 [2024-05-15 19:46:13.930120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.914 [2024-05-15 19:46:13.930477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.914 [2024-05-15 19:46:13.930493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.914 [2024-05-15 19:46:13.930503] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.914 [2024-05-15 19:46:13.930749] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.914 [2024-05-15 19:46:13.930973] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.914 [2024-05-15 19:46:13.930981] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.914 [2024-05-15 19:46:13.930989] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.914 [2024-05-15 19:46:13.934548] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.914 [2024-05-15 19:46:13.943318] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.914 [2024-05-15 19:46:13.944001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.914 [2024-05-15 19:46:13.944435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.914 [2024-05-15 19:46:13.944450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.914 [2024-05-15 19:46:13.944461] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.914 [2024-05-15 19:46:13.944705] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.914 [2024-05-15 19:46:13.944930] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.914 [2024-05-15 19:46:13.944938] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.914 [2024-05-15 19:46:13.944945] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.914 [2024-05-15 19:46:13.948514] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.914 [2024-05-15 19:46:13.957284] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.914 [2024-05-15 19:46:13.957864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.914 [2024-05-15 19:46:13.958207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.914 [2024-05-15 19:46:13.958220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.914 [2024-05-15 19:46:13.958230] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.914 [2024-05-15 19:46:13.958486] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.914 [2024-05-15 19:46:13.958711] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.914 [2024-05-15 19:46:13.958720] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.914 [2024-05-15 19:46:13.958727] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.914 [2024-05-15 19:46:13.962280] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.914 [2024-05-15 19:46:13.971252] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.914 [2024-05-15 19:46:13.972009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.914 [2024-05-15 19:46:13.972397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.914 [2024-05-15 19:46:13.972412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.914 [2024-05-15 19:46:13.972422] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.914 [2024-05-15 19:46:13.972663] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.914 [2024-05-15 19:46:13.972885] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.914 [2024-05-15 19:46:13.972893] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.914 [2024-05-15 19:46:13.972901] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.914 [2024-05-15 19:46:13.976449] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.914 [2024-05-15 19:46:13.985211] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.914 [2024-05-15 19:46:13.985876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.914 [2024-05-15 19:46:13.986267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.914 [2024-05-15 19:46:13.986280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.914 [2024-05-15 19:46:13.986290] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.914 [2024-05-15 19:46:13.986540] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.914 [2024-05-15 19:46:13.986764] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.914 [2024-05-15 19:46:13.986772] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.914 [2024-05-15 19:46:13.986779] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.914 [2024-05-15 19:46:13.990323] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.914 [2024-05-15 19:46:13.999085] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.914 [2024-05-15 19:46:13.999688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.914 [2024-05-15 19:46:14.000089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.914 [2024-05-15 19:46:14.000099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.914 [2024-05-15 19:46:14.000106] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.914 [2024-05-15 19:46:14.000331] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.914 [2024-05-15 19:46:14.000551] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.915 [2024-05-15 19:46:14.000558] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.915 [2024-05-15 19:46:14.000565] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.915 [2024-05-15 19:46:14.004147] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.915 [2024-05-15 19:46:14.012903] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.915 [2024-05-15 19:46:14.013599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.915 [2024-05-15 19:46:14.013983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.915 [2024-05-15 19:46:14.013995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.915 [2024-05-15 19:46:14.014005] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.915 [2024-05-15 19:46:14.014244] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.915 [2024-05-15 19:46:14.014475] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.915 [2024-05-15 19:46:14.014484] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.915 [2024-05-15 19:46:14.014491] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.915 [2024-05-15 19:46:14.018035] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.915 [2024-05-15 19:46:14.026798] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.915 [2024-05-15 19:46:14.027582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.915 [2024-05-15 19:46:14.027970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.915 [2024-05-15 19:46:14.027982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.915 [2024-05-15 19:46:14.027992] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.915 [2024-05-15 19:46:14.028230] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.915 [2024-05-15 19:46:14.028460] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.915 [2024-05-15 19:46:14.028469] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.915 [2024-05-15 19:46:14.028476] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.915 [2024-05-15 19:46:14.032022] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.915 [2024-05-15 19:46:14.040578] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.915 [2024-05-15 19:46:14.041294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.915 [2024-05-15 19:46:14.041595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.915 [2024-05-15 19:46:14.041608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.915 [2024-05-15 19:46:14.041617] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.915 [2024-05-15 19:46:14.041855] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.915 [2024-05-15 19:46:14.042077] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.915 [2024-05-15 19:46:14.042086] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.915 [2024-05-15 19:46:14.042093] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.915 [2024-05-15 19:46:14.045639] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.915 [2024-05-15 19:46:14.054409] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.915 [2024-05-15 19:46:14.055064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.915 [2024-05-15 19:46:14.055485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.915 [2024-05-15 19:46:14.055499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.915 [2024-05-15 19:46:14.055513] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.915 [2024-05-15 19:46:14.055750] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.915 [2024-05-15 19:46:14.055973] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.915 [2024-05-15 19:46:14.055980] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.915 [2024-05-15 19:46:14.055988] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.915 [2024-05-15 19:46:14.059536] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.915 [2024-05-15 19:46:14.068294] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.915 [2024-05-15 19:46:14.069016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.915 [2024-05-15 19:46:14.069441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.915 [2024-05-15 19:46:14.069456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.915 [2024-05-15 19:46:14.069465] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.915 [2024-05-15 19:46:14.069702] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.915 [2024-05-15 19:46:14.069924] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.915 [2024-05-15 19:46:14.069933] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.915 [2024-05-15 19:46:14.069940] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.915 [2024-05-15 19:46:14.073487] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.915 [2024-05-15 19:46:14.082246] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.915 [2024-05-15 19:46:14.082963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.915 [2024-05-15 19:46:14.083293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.915 [2024-05-15 19:46:14.083306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:47.915 [2024-05-15 19:46:14.083322] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:47.915 [2024-05-15 19:46:14.083560] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:47.915 [2024-05-15 19:46:14.083782] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.915 [2024-05-15 19:46:14.083790] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.915 [2024-05-15 19:46:14.083798] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.915 [2024-05-15 19:46:14.087342] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.915 [2024-05-15 19:46:14.096099] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.915 [2024-05-15 19:46:14.096791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.178 [2024-05-15 19:46:14.097178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.178 [2024-05-15 19:46:14.097192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.178 [2024-05-15 19:46:14.097201] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.178 [2024-05-15 19:46:14.097452] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.178 [2024-05-15 19:46:14.097675] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.178 [2024-05-15 19:46:14.097683] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.178 [2024-05-15 19:46:14.097690] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.178 [2024-05-15 19:46:14.101233] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.178 [2024-05-15 19:46:14.109996] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.178 [2024-05-15 19:46:14.110681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.178 [2024-05-15 19:46:14.111066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.178 [2024-05-15 19:46:14.111078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.178 [2024-05-15 19:46:14.111088] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.178 [2024-05-15 19:46:14.111333] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.178 [2024-05-15 19:46:14.111555] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.178 [2024-05-15 19:46:14.111563] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.178 [2024-05-15 19:46:14.111570] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.178 [2024-05-15 19:46:14.115111] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.178 [2024-05-15 19:46:14.123867] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.178 [2024-05-15 19:46:14.124394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.178 [2024-05-15 19:46:14.124785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.178 [2024-05-15 19:46:14.124797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.178 [2024-05-15 19:46:14.124807] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.178 [2024-05-15 19:46:14.125045] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.178 [2024-05-15 19:46:14.125267] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.178 [2024-05-15 19:46:14.125274] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.178 [2024-05-15 19:46:14.125282] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.178 [2024-05-15 19:46:14.128833] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.178 [2024-05-15 19:46:14.137820] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.178 [2024-05-15 19:46:14.138433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.178 [2024-05-15 19:46:14.138833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.178 [2024-05-15 19:46:14.138845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.178 [2024-05-15 19:46:14.138854] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.178 [2024-05-15 19:46:14.139100] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.178 [2024-05-15 19:46:14.139330] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.178 [2024-05-15 19:46:14.139339] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.178 [2024-05-15 19:46:14.139346] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.178 [2024-05-15 19:46:14.142889] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.178 [2024-05-15 19:46:14.151660] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.178 [2024-05-15 19:46:14.152372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.178 [2024-05-15 19:46:14.152722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.178 [2024-05-15 19:46:14.152735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.178 [2024-05-15 19:46:14.152744] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.178 [2024-05-15 19:46:14.152981] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.178 [2024-05-15 19:46:14.153203] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.178 [2024-05-15 19:46:14.153212] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.178 [2024-05-15 19:46:14.153219] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.178 [2024-05-15 19:46:14.156766] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.178 [2024-05-15 19:46:14.165520] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.178 [2024-05-15 19:46:14.166213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.178 [2024-05-15 19:46:14.166614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.178 [2024-05-15 19:46:14.166628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.178 [2024-05-15 19:46:14.166638] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.178 [2024-05-15 19:46:14.166876] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.178 [2024-05-15 19:46:14.167098] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.178 [2024-05-15 19:46:14.167106] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.178 [2024-05-15 19:46:14.167114] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.178 [2024-05-15 19:46:14.170658] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.178 [2024-05-15 19:46:14.179417] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.178 [2024-05-15 19:46:14.179908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.178 [2024-05-15 19:46:14.180236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.178 [2024-05-15 19:46:14.180246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.178 [2024-05-15 19:46:14.180253] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.178 [2024-05-15 19:46:14.180479] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.178 [2024-05-15 19:46:14.180704] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.178 [2024-05-15 19:46:14.180714] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.178 [2024-05-15 19:46:14.180722] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.178 [2024-05-15 19:46:14.184259] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.178 [2024-05-15 19:46:14.193220] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.178 [2024-05-15 19:46:14.193838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.178 [2024-05-15 19:46:14.194058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.178 [2024-05-15 19:46:14.194070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.178 [2024-05-15 19:46:14.194078] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.178 [2024-05-15 19:46:14.194299] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.178 [2024-05-15 19:46:14.194524] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.178 [2024-05-15 19:46:14.194532] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.178 [2024-05-15 19:46:14.194538] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.178 [2024-05-15 19:46:14.198070] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.178 [2024-05-15 19:46:14.207032] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.178 [2024-05-15 19:46:14.207604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.178 [2024-05-15 19:46:14.207937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.178 [2024-05-15 19:46:14.207951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.178 [2024-05-15 19:46:14.207960] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.178 [2024-05-15 19:46:14.208197] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.178 [2024-05-15 19:46:14.208425] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.178 [2024-05-15 19:46:14.208434] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.178 [2024-05-15 19:46:14.208442] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.178 [2024-05-15 19:46:14.211985] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.178 [2024-05-15 19:46:14.220967] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.178 [2024-05-15 19:46:14.221561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.178 [2024-05-15 19:46:14.221959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.178 [2024-05-15 19:46:14.221972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.178 [2024-05-15 19:46:14.221981] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.178 [2024-05-15 19:46:14.222219] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.178 [2024-05-15 19:46:14.222447] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.178 [2024-05-15 19:46:14.222456] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.178 [2024-05-15 19:46:14.222468] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.178 [2024-05-15 19:46:14.226008] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.178 [2024-05-15 19:46:14.234766] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.178 [2024-05-15 19:46:14.235514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.178 [2024-05-15 19:46:14.235795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.178 [2024-05-15 19:46:14.235808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.178 [2024-05-15 19:46:14.235817] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.178 [2024-05-15 19:46:14.236055] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.178 [2024-05-15 19:46:14.236277] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.178 [2024-05-15 19:46:14.236285] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.178 [2024-05-15 19:46:14.236292] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.178 [2024-05-15 19:46:14.239842] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.178 [2024-05-15 19:46:14.248629] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.178 [2024-05-15 19:46:14.249312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.178 [2024-05-15 19:46:14.249751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.178 [2024-05-15 19:46:14.249764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.178 [2024-05-15 19:46:14.249774] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.178 [2024-05-15 19:46:14.250013] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.178 [2024-05-15 19:46:14.250235] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.178 [2024-05-15 19:46:14.250244] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.178 [2024-05-15 19:46:14.250251] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.178 [2024-05-15 19:46:14.253802] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.178 [2024-05-15 19:46:14.262573] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.178 [2024-05-15 19:46:14.263323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.178 [2024-05-15 19:46:14.263736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.178 [2024-05-15 19:46:14.263748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.178 [2024-05-15 19:46:14.263758] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.178 [2024-05-15 19:46:14.263995] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.178 [2024-05-15 19:46:14.264217] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.178 [2024-05-15 19:46:14.264225] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.178 [2024-05-15 19:46:14.264238] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.178 [2024-05-15 19:46:14.267792] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.178 [2024-05-15 19:46:14.276384] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.178 [2024-05-15 19:46:14.277024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.178 [2024-05-15 19:46:14.277395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.178 [2024-05-15 19:46:14.277406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.178 [2024-05-15 19:46:14.277414] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.178 [2024-05-15 19:46:14.277633] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.178 [2024-05-15 19:46:14.277851] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.178 [2024-05-15 19:46:14.277858] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.178 [2024-05-15 19:46:14.277865] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.178 [2024-05-15 19:46:14.281410] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.178 [2024-05-15 19:46:14.290171] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.178 [2024-05-15 19:46:14.290768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.178 [2024-05-15 19:46:14.291124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.178 [2024-05-15 19:46:14.291134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.179 [2024-05-15 19:46:14.291141] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.179 [2024-05-15 19:46:14.291365] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.179 [2024-05-15 19:46:14.291584] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.179 [2024-05-15 19:46:14.291591] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.179 [2024-05-15 19:46:14.291598] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.179 [2024-05-15 19:46:14.295135] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.179 [2024-05-15 19:46:14.304108] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.179 [2024-05-15 19:46:14.304713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.179 [2024-05-15 19:46:14.305033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.179 [2024-05-15 19:46:14.305042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.179 [2024-05-15 19:46:14.305049] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.179 [2024-05-15 19:46:14.305268] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.179 [2024-05-15 19:46:14.305492] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.179 [2024-05-15 19:46:14.305501] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.179 [2024-05-15 19:46:14.305507] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.179 [2024-05-15 19:46:14.309045] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.179 [2024-05-15 19:46:14.318026] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.179 [2024-05-15 19:46:14.318758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.179 [2024-05-15 19:46:14.319149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.179 [2024-05-15 19:46:14.319162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.179 [2024-05-15 19:46:14.319171] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.179 [2024-05-15 19:46:14.319416] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.179 [2024-05-15 19:46:14.319639] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.179 [2024-05-15 19:46:14.319648] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.179 [2024-05-15 19:46:14.319655] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.179 [2024-05-15 19:46:14.323196] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.179 [2024-05-15 19:46:14.331961] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.179 [2024-05-15 19:46:14.332676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.179 [2024-05-15 19:46:14.333060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.179 [2024-05-15 19:46:14.333072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.179 [2024-05-15 19:46:14.333082] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.179 [2024-05-15 19:46:14.333326] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.179 [2024-05-15 19:46:14.333549] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.179 [2024-05-15 19:46:14.333557] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.179 [2024-05-15 19:46:14.333564] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.179 [2024-05-15 19:46:14.337107] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.179 [2024-05-15 19:46:14.345878] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.179 [2024-05-15 19:46:14.346582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.179 [2024-05-15 19:46:14.346965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.179 [2024-05-15 19:46:14.346978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.179 [2024-05-15 19:46:14.346987] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.179 [2024-05-15 19:46:14.347224] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.179 [2024-05-15 19:46:14.347464] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.179 [2024-05-15 19:46:14.347473] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.179 [2024-05-15 19:46:14.347481] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.179 [2024-05-15 19:46:14.351021] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.179 [2024-05-15 19:46:14.359785] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.179 [2024-05-15 19:46:14.360424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.179 [2024-05-15 19:46:14.360694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.179 [2024-05-15 19:46:14.360709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.179 [2024-05-15 19:46:14.360718] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.179 [2024-05-15 19:46:14.360956] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.441 [2024-05-15 19:46:14.361178] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.441 [2024-05-15 19:46:14.361188] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.441 [2024-05-15 19:46:14.361195] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.441 [2024-05-15 19:46:14.364747] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.441 [2024-05-15 19:46:14.373729] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.441 [2024-05-15 19:46:14.374420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.441 [2024-05-15 19:46:14.374777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.441 [2024-05-15 19:46:14.374789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.441 [2024-05-15 19:46:14.374798] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.441 [2024-05-15 19:46:14.375036] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.441 [2024-05-15 19:46:14.375257] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.441 [2024-05-15 19:46:14.375266] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.441 [2024-05-15 19:46:14.375274] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.441 [2024-05-15 19:46:14.378819] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.441 [2024-05-15 19:46:14.387576] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.441 [2024-05-15 19:46:14.388220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.441 [2024-05-15 19:46:14.388425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.441 [2024-05-15 19:46:14.388438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.441 [2024-05-15 19:46:14.388446] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.441 [2024-05-15 19:46:14.388667] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.441 [2024-05-15 19:46:14.388886] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.441 [2024-05-15 19:46:14.388894] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.441 [2024-05-15 19:46:14.388901] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.441 [2024-05-15 19:46:14.392440] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.441 [2024-05-15 19:46:14.401413] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.441 [2024-05-15 19:46:14.402043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.441 [2024-05-15 19:46:14.402444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.441 [2024-05-15 19:46:14.402455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.441 [2024-05-15 19:46:14.402462] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.441 [2024-05-15 19:46:14.402680] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.441 [2024-05-15 19:46:14.402898] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.441 [2024-05-15 19:46:14.402906] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.441 [2024-05-15 19:46:14.402913] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.441 [2024-05-15 19:46:14.406454] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.441 [2024-05-15 19:46:14.415217] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.441 [2024-05-15 19:46:14.415895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.441 [2024-05-15 19:46:14.416278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.441 [2024-05-15 19:46:14.416292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.441 [2024-05-15 19:46:14.416301] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.441 [2024-05-15 19:46:14.416547] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.441 [2024-05-15 19:46:14.416770] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.441 [2024-05-15 19:46:14.416778] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.441 [2024-05-15 19:46:14.416785] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.441 [2024-05-15 19:46:14.420334] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.441 [2024-05-15 19:46:14.429103] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.441 [2024-05-15 19:46:14.429791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.441 [2024-05-15 19:46:14.430176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.441 [2024-05-15 19:46:14.430188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.441 [2024-05-15 19:46:14.430197] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.441 [2024-05-15 19:46:14.430444] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.441 [2024-05-15 19:46:14.430668] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.441 [2024-05-15 19:46:14.430676] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.441 [2024-05-15 19:46:14.430683] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.441 [2024-05-15 19:46:14.434226] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.441 [2024-05-15 19:46:14.443003] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.441 [2024-05-15 19:46:14.443697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.441 [2024-05-15 19:46:14.444085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.441 [2024-05-15 19:46:14.444098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.441 [2024-05-15 19:46:14.444112] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.441 [2024-05-15 19:46:14.444359] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.441 [2024-05-15 19:46:14.444581] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.441 [2024-05-15 19:46:14.444589] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.441 [2024-05-15 19:46:14.444597] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.441 [2024-05-15 19:46:14.448137] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.441 [2024-05-15 19:46:14.456947] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.441 [2024-05-15 19:46:14.457476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.441 [2024-05-15 19:46:14.457857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.441 [2024-05-15 19:46:14.457866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.441 [2024-05-15 19:46:14.457874] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.441 [2024-05-15 19:46:14.458093] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.441 [2024-05-15 19:46:14.458311] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.442 [2024-05-15 19:46:14.458325] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.442 [2024-05-15 19:46:14.458332] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.442 [2024-05-15 19:46:14.461868] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.442 [2024-05-15 19:46:14.470847] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.442 [2024-05-15 19:46:14.471575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.442 [2024-05-15 19:46:14.471957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.442 [2024-05-15 19:46:14.471970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.442 [2024-05-15 19:46:14.471980] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.442 [2024-05-15 19:46:14.472217] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.442 [2024-05-15 19:46:14.472447] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.442 [2024-05-15 19:46:14.472457] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.442 [2024-05-15 19:46:14.472465] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.442 [2024-05-15 19:46:14.476005] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.442 [2024-05-15 19:46:14.484774] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.442 [2024-05-15 19:46:14.485420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.442 [2024-05-15 19:46:14.485812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.442 [2024-05-15 19:46:14.485825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.442 [2024-05-15 19:46:14.485838] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.442 [2024-05-15 19:46:14.486076] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.442 [2024-05-15 19:46:14.486298] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.442 [2024-05-15 19:46:14.486306] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.442 [2024-05-15 19:46:14.486321] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.442 [2024-05-15 19:46:14.489864] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.442 [2024-05-15 19:46:14.498630] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.442 [2024-05-15 19:46:14.499199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.442 [2024-05-15 19:46:14.499564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.442 [2024-05-15 19:46:14.499575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.442 [2024-05-15 19:46:14.499583] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.442 [2024-05-15 19:46:14.499802] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.442 [2024-05-15 19:46:14.500020] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.442 [2024-05-15 19:46:14.500028] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.442 [2024-05-15 19:46:14.500035] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.442 [2024-05-15 19:46:14.503579] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.442 [2024-05-15 19:46:14.512555] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.442 [2024-05-15 19:46:14.513198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.442 [2024-05-15 19:46:14.513591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.442 [2024-05-15 19:46:14.513605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.442 [2024-05-15 19:46:14.513614] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.442 [2024-05-15 19:46:14.513851] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.442 [2024-05-15 19:46:14.514073] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.442 [2024-05-15 19:46:14.514082] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.442 [2024-05-15 19:46:14.514089] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.442 [2024-05-15 19:46:14.517641] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.442 [2024-05-15 19:46:14.526418] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.442 [2024-05-15 19:46:14.527098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.442 [2024-05-15 19:46:14.527479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.442 [2024-05-15 19:46:14.527493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.442 [2024-05-15 19:46:14.527502] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.442 [2024-05-15 19:46:14.527744] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.442 [2024-05-15 19:46:14.527966] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.442 [2024-05-15 19:46:14.527975] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.442 [2024-05-15 19:46:14.527982] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.442 [2024-05-15 19:46:14.531535] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.442 [2024-05-15 19:46:14.540306] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.442 [2024-05-15 19:46:14.541033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.442 [2024-05-15 19:46:14.541419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.442 [2024-05-15 19:46:14.541433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.442 [2024-05-15 19:46:14.541442] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.442 [2024-05-15 19:46:14.541680] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.442 [2024-05-15 19:46:14.541902] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.442 [2024-05-15 19:46:14.541911] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.442 [2024-05-15 19:46:14.541918] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.442 [2024-05-15 19:46:14.545471] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.442 [2024-05-15 19:46:14.554256] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.442 [2024-05-15 19:46:14.554864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.442 [2024-05-15 19:46:14.555219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.442 [2024-05-15 19:46:14.555228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.442 [2024-05-15 19:46:14.555236] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.442 [2024-05-15 19:46:14.555461] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.442 [2024-05-15 19:46:14.555680] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.442 [2024-05-15 19:46:14.555688] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.442 [2024-05-15 19:46:14.555695] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.442 [2024-05-15 19:46:14.559236] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.442 [2024-05-15 19:46:14.568214] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.442 [2024-05-15 19:46:14.568830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.442 [2024-05-15 19:46:14.569132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.442 [2024-05-15 19:46:14.569142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.442 [2024-05-15 19:46:14.569149] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.442 [2024-05-15 19:46:14.569373] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.442 [2024-05-15 19:46:14.569596] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.442 [2024-05-15 19:46:14.569604] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.442 [2024-05-15 19:46:14.569611] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.442 [2024-05-15 19:46:14.573147] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.442 [2024-05-15 19:46:14.582129] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.442 [2024-05-15 19:46:14.582649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.442 [2024-05-15 19:46:14.583018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.442 [2024-05-15 19:46:14.583027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.442 [2024-05-15 19:46:14.583034] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.442 [2024-05-15 19:46:14.583252] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.442 [2024-05-15 19:46:14.583477] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.442 [2024-05-15 19:46:14.583485] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.442 [2024-05-15 19:46:14.583492] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.442 [2024-05-15 19:46:14.587030] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.442 [2024-05-15 19:46:14.596008] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.442 [2024-05-15 19:46:14.596689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.442 [2024-05-15 19:46:14.597078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.442 [2024-05-15 19:46:14.597091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.442 [2024-05-15 19:46:14.597100] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.442 [2024-05-15 19:46:14.597343] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.442 [2024-05-15 19:46:14.597565] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.442 [2024-05-15 19:46:14.597574] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.442 [2024-05-15 19:46:14.597581] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.442 [2024-05-15 19:46:14.601118] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.442 [2024-05-15 19:46:14.610065] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.442 [2024-05-15 19:46:14.610764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.442 [2024-05-15 19:46:14.611152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.442 [2024-05-15 19:46:14.611165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.442 [2024-05-15 19:46:14.611174] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.442 [2024-05-15 19:46:14.611420] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.442 [2024-05-15 19:46:14.611642] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.442 [2024-05-15 19:46:14.611655] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.442 [2024-05-15 19:46:14.611662] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.442 [2024-05-15 19:46:14.615202] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.442 [2024-05-15 19:46:14.623972] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.703 [2024-05-15 19:46:14.624669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.703 [2024-05-15 19:46:14.625053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.703 [2024-05-15 19:46:14.625067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.703 [2024-05-15 19:46:14.625076] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.703 [2024-05-15 19:46:14.625320] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.703 [2024-05-15 19:46:14.625543] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.703 [2024-05-15 19:46:14.625551] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.703 [2024-05-15 19:46:14.625559] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.703 [2024-05-15 19:46:14.629097] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.703 [2024-05-15 19:46:14.637858] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.703 [2024-05-15 19:46:14.638564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.703 [2024-05-15 19:46:14.638953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.703 [2024-05-15 19:46:14.638966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.703 [2024-05-15 19:46:14.638975] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.703 [2024-05-15 19:46:14.639213] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.703 [2024-05-15 19:46:14.639443] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.703 [2024-05-15 19:46:14.639452] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.703 [2024-05-15 19:46:14.639459] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.703 [2024-05-15 19:46:14.642999] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.704 [2024-05-15 19:46:14.651782] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.704 [2024-05-15 19:46:14.652452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.704 [2024-05-15 19:46:14.652839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.704 [2024-05-15 19:46:14.652851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.704 [2024-05-15 19:46:14.652861] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.704 [2024-05-15 19:46:14.653098] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.704 [2024-05-15 19:46:14.653327] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.704 [2024-05-15 19:46:14.653336] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.704 [2024-05-15 19:46:14.653348] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.704 [2024-05-15 19:46:14.656890] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.704 [2024-05-15 19:46:14.665663] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.704 [2024-05-15 19:46:14.666297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.704 [2024-05-15 19:46:14.666573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.704 [2024-05-15 19:46:14.666583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.704 [2024-05-15 19:46:14.666590] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.704 [2024-05-15 19:46:14.666809] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.704 [2024-05-15 19:46:14.667026] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.704 [2024-05-15 19:46:14.667034] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.704 [2024-05-15 19:46:14.667040] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.704 [2024-05-15 19:46:14.670580] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.704 [2024-05-15 19:46:14.679550] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.704 [2024-05-15 19:46:14.680113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.704 [2024-05-15 19:46:14.680499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.704 [2024-05-15 19:46:14.680513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.704 [2024-05-15 19:46:14.680523] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.704 [2024-05-15 19:46:14.680760] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.704 [2024-05-15 19:46:14.680982] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.704 [2024-05-15 19:46:14.680990] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.704 [2024-05-15 19:46:14.680997] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.704 [2024-05-15 19:46:14.684545] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.704 [2024-05-15 19:46:14.693518] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.704 [2024-05-15 19:46:14.694227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.704 [2024-05-15 19:46:14.694649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.704 [2024-05-15 19:46:14.694663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.704 [2024-05-15 19:46:14.694672] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.704 [2024-05-15 19:46:14.694911] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.704 [2024-05-15 19:46:14.695133] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.704 [2024-05-15 19:46:14.695141] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.704 [2024-05-15 19:46:14.695149] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.704 [2024-05-15 19:46:14.698698] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.704 [2024-05-15 19:46:14.707469] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.704 [2024-05-15 19:46:14.708065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.704 [2024-05-15 19:46:14.708421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.704 [2024-05-15 19:46:14.708431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.704 [2024-05-15 19:46:14.708439] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.704 [2024-05-15 19:46:14.708657] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.704 [2024-05-15 19:46:14.708876] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.704 [2024-05-15 19:46:14.708884] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.704 [2024-05-15 19:46:14.708891] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.704 [2024-05-15 19:46:14.712433] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.704 [2024-05-15 19:46:14.721399] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.704 [2024-05-15 19:46:14.722029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.704 [2024-05-15 19:46:14.722426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.704 [2024-05-15 19:46:14.722437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.704 [2024-05-15 19:46:14.722444] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.704 [2024-05-15 19:46:14.722662] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.704 [2024-05-15 19:46:14.722880] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.704 [2024-05-15 19:46:14.722887] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.704 [2024-05-15 19:46:14.722894] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.704 [2024-05-15 19:46:14.726434] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.704 [2024-05-15 19:46:14.735192] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.704 [2024-05-15 19:46:14.735754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.704 [2024-05-15 19:46:14.736136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.704 [2024-05-15 19:46:14.736149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.704 [2024-05-15 19:46:14.736158] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.704 [2024-05-15 19:46:14.736403] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.704 [2024-05-15 19:46:14.736625] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.704 [2024-05-15 19:46:14.736633] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.704 [2024-05-15 19:46:14.736640] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.704 [2024-05-15 19:46:14.740183] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.704 [2024-05-15 19:46:14.749163] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.704 [2024-05-15 19:46:14.749811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.704 [2024-05-15 19:46:14.750171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.704 [2024-05-15 19:46:14.750180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.704 [2024-05-15 19:46:14.750188] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.704 [2024-05-15 19:46:14.750413] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.704 [2024-05-15 19:46:14.750633] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.704 [2024-05-15 19:46:14.750641] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.704 [2024-05-15 19:46:14.750648] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.704 [2024-05-15 19:46:14.754179] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.704 [2024-05-15 19:46:14.763144] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.704 [2024-05-15 19:46:14.763833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.704 [2024-05-15 19:46:14.764218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.704 [2024-05-15 19:46:14.764231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.704 [2024-05-15 19:46:14.764240] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.704 [2024-05-15 19:46:14.764484] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.704 [2024-05-15 19:46:14.764707] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.704 [2024-05-15 19:46:14.764715] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.704 [2024-05-15 19:46:14.764722] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.704 [2024-05-15 19:46:14.768264] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.704 [2024-05-15 19:46:14.777036] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.704 [2024-05-15 19:46:14.777732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.704 [2024-05-15 19:46:14.778120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.704 [2024-05-15 19:46:14.778133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.704 [2024-05-15 19:46:14.778142] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.704 [2024-05-15 19:46:14.778388] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.705 [2024-05-15 19:46:14.778611] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.705 [2024-05-15 19:46:14.778619] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.705 [2024-05-15 19:46:14.778627] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.705 [2024-05-15 19:46:14.782168] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.705 [2024-05-15 19:46:14.790938] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.705 [2024-05-15 19:46:14.791507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.705 [2024-05-15 19:46:14.791736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.705 [2024-05-15 19:46:14.791748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.705 [2024-05-15 19:46:14.791756] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.705 [2024-05-15 19:46:14.791977] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.705 [2024-05-15 19:46:14.792196] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.705 [2024-05-15 19:46:14.792204] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.705 [2024-05-15 19:46:14.792210] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.705 [2024-05-15 19:46:14.795757] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.705 [2024-05-15 19:46:14.804727] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.705 [2024-05-15 19:46:14.805340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.705 [2024-05-15 19:46:14.805725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.705 [2024-05-15 19:46:14.805738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.705 [2024-05-15 19:46:14.805748] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.705 [2024-05-15 19:46:14.805986] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.705 [2024-05-15 19:46:14.806207] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.705 [2024-05-15 19:46:14.806215] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.705 [2024-05-15 19:46:14.806223] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.705 [2024-05-15 19:46:14.809773] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.705 [2024-05-15 19:46:14.818540] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.705 [2024-05-15 19:46:14.819225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.705 [2024-05-15 19:46:14.819582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.705 [2024-05-15 19:46:14.819597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.705 [2024-05-15 19:46:14.819606] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.705 [2024-05-15 19:46:14.819844] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.705 [2024-05-15 19:46:14.820066] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.705 [2024-05-15 19:46:14.820075] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.705 [2024-05-15 19:46:14.820082] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.705 [2024-05-15 19:46:14.823631] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.705 [2024-05-15 19:46:14.832394] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.705 [2024-05-15 19:46:14.833089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.705 [2024-05-15 19:46:14.833406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.705 [2024-05-15 19:46:14.833425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.705 [2024-05-15 19:46:14.833435] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.705 [2024-05-15 19:46:14.833673] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.705 [2024-05-15 19:46:14.833895] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.705 [2024-05-15 19:46:14.833903] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.705 [2024-05-15 19:46:14.833910] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.705 [2024-05-15 19:46:14.837453] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.705 [2024-05-15 19:46:14.846212] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.705 [2024-05-15 19:46:14.846814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.705 [2024-05-15 19:46:14.847171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.705 [2024-05-15 19:46:14.847181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.705 [2024-05-15 19:46:14.847189] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.705 [2024-05-15 19:46:14.847422] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.705 [2024-05-15 19:46:14.847641] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.705 [2024-05-15 19:46:14.847648] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.705 [2024-05-15 19:46:14.847655] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.705 [2024-05-15 19:46:14.851195] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.705 [2024-05-15 19:46:14.860174] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.705 [2024-05-15 19:46:14.860769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.705 [2024-05-15 19:46:14.861128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.705 [2024-05-15 19:46:14.861137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.705 [2024-05-15 19:46:14.861144] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.705 [2024-05-15 19:46:14.861368] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.705 [2024-05-15 19:46:14.861587] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.705 [2024-05-15 19:46:14.861594] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.705 [2024-05-15 19:46:14.861601] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.705 [2024-05-15 19:46:14.865141] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.705 [2024-05-15 19:46:14.874119] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.705 [2024-05-15 19:46:14.874739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.705 [2024-05-15 19:46:14.875093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.705 [2024-05-15 19:46:14.875103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.705 [2024-05-15 19:46:14.875114] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.705 [2024-05-15 19:46:14.875338] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.705 [2024-05-15 19:46:14.875557] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.705 [2024-05-15 19:46:14.875564] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.705 [2024-05-15 19:46:14.875570] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.705 [2024-05-15 19:46:14.879110] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.967 [2024-05-15 19:46:14.888085] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.967 [2024-05-15 19:46:14.888685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.967 [2024-05-15 19:46:14.889001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.967 [2024-05-15 19:46:14.889011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.967 [2024-05-15 19:46:14.889018] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.967 [2024-05-15 19:46:14.889236] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.967 [2024-05-15 19:46:14.889460] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.967 [2024-05-15 19:46:14.889468] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.967 [2024-05-15 19:46:14.889475] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.967 [2024-05-15 19:46:14.893012] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.967 [2024-05-15 19:46:14.901989] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.967 [2024-05-15 19:46:14.902473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.967 [2024-05-15 19:46:14.902735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.967 [2024-05-15 19:46:14.902749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.967 [2024-05-15 19:46:14.902756] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.967 [2024-05-15 19:46:14.902976] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.967 [2024-05-15 19:46:14.903196] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.967 [2024-05-15 19:46:14.903203] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.967 [2024-05-15 19:46:14.903209] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.967 [2024-05-15 19:46:14.906754] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.967 [2024-05-15 19:46:14.915936] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.967 [2024-05-15 19:46:14.916526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.967 [2024-05-15 19:46:14.916922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.967 [2024-05-15 19:46:14.916932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.967 [2024-05-15 19:46:14.916939] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.967 [2024-05-15 19:46:14.917162] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.967 [2024-05-15 19:46:14.917385] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.967 [2024-05-15 19:46:14.917393] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.967 [2024-05-15 19:46:14.917400] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.967 [2024-05-15 19:46:14.920938] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.967 [2024-05-15 19:46:14.929903] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.967 [2024-05-15 19:46:14.930491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.967 [2024-05-15 19:46:14.930886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.967 [2024-05-15 19:46:14.930895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.967 [2024-05-15 19:46:14.930902] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.967 [2024-05-15 19:46:14.931121] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.967 [2024-05-15 19:46:14.931342] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.967 [2024-05-15 19:46:14.931350] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.967 [2024-05-15 19:46:14.931357] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.967 [2024-05-15 19:46:14.934892] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.967 [2024-05-15 19:46:14.943724] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.967 [2024-05-15 19:46:14.944351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.967 [2024-05-15 19:46:14.944739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.967 [2024-05-15 19:46:14.944749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.967 [2024-05-15 19:46:14.944756] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.967 [2024-05-15 19:46:14.944974] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.967 [2024-05-15 19:46:14.945193] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.967 [2024-05-15 19:46:14.945200] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.967 [2024-05-15 19:46:14.945206] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.967 [2024-05-15 19:46:14.948752] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.967 [2024-05-15 19:46:14.957507] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.967 [2024-05-15 19:46:14.958134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.967 [2024-05-15 19:46:14.958486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.967 [2024-05-15 19:46:14.958497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.967 [2024-05-15 19:46:14.958504] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.967 [2024-05-15 19:46:14.958722] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.967 [2024-05-15 19:46:14.958944] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.967 [2024-05-15 19:46:14.958952] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.967 [2024-05-15 19:46:14.958958] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.967 [2024-05-15 19:46:14.962496] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.967 [2024-05-15 19:46:14.971461] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.967 [2024-05-15 19:46:14.972096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.967 [2024-05-15 19:46:14.972489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.967 [2024-05-15 19:46:14.972504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.967 [2024-05-15 19:46:14.972513] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.967 [2024-05-15 19:46:14.972751] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.967 [2024-05-15 19:46:14.972973] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.967 [2024-05-15 19:46:14.972981] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.967 [2024-05-15 19:46:14.972988] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.967 [2024-05-15 19:46:14.976532] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.967 [2024-05-15 19:46:14.985286] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.967 [2024-05-15 19:46:14.985994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.967 [2024-05-15 19:46:14.986376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.967 [2024-05-15 19:46:14.986389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.967 [2024-05-15 19:46:14.986398] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.967 [2024-05-15 19:46:14.986636] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.967 [2024-05-15 19:46:14.986858] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.967 [2024-05-15 19:46:14.986866] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.967 [2024-05-15 19:46:14.986874] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.967 [2024-05-15 19:46:14.990418] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.967 [2024-05-15 19:46:14.999174] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.967 [2024-05-15 19:46:14.999862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.967 [2024-05-15 19:46:15.000189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.967 [2024-05-15 19:46:15.000201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.967 [2024-05-15 19:46:15.000211] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.967 [2024-05-15 19:46:15.000457] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.967 [2024-05-15 19:46:15.000679] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.967 [2024-05-15 19:46:15.000692] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.967 [2024-05-15 19:46:15.000699] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.967 [2024-05-15 19:46:15.004241] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.967 [2024-05-15 19:46:15.012995] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.968 [2024-05-15 19:46:15.013668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.968 [2024-05-15 19:46:15.013951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.968 [2024-05-15 19:46:15.013964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.968 [2024-05-15 19:46:15.013973] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.968 [2024-05-15 19:46:15.014211] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.968 [2024-05-15 19:46:15.014441] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.968 [2024-05-15 19:46:15.014451] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.968 [2024-05-15 19:46:15.014458] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.968 [2024-05-15 19:46:15.017997] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.968 [2024-05-15 19:46:15.026957] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.968 [2024-05-15 19:46:15.027684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.968 [2024-05-15 19:46:15.028073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.968 [2024-05-15 19:46:15.028085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.968 [2024-05-15 19:46:15.028094] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.968 [2024-05-15 19:46:15.028340] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.968 [2024-05-15 19:46:15.028563] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.968 [2024-05-15 19:46:15.028571] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.968 [2024-05-15 19:46:15.028578] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.968 [2024-05-15 19:46:15.032119] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.968 [2024-05-15 19:46:15.040873] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.968 [2024-05-15 19:46:15.041563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.968 [2024-05-15 19:46:15.041854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.968 [2024-05-15 19:46:15.041867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.968 [2024-05-15 19:46:15.041876] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.968 [2024-05-15 19:46:15.042112] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.968 [2024-05-15 19:46:15.042344] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.968 [2024-05-15 19:46:15.042353] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.968 [2024-05-15 19:46:15.042364] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.968 [2024-05-15 19:46:15.045904] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.968 [2024-05-15 19:46:15.054670] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.968 [2024-05-15 19:46:15.055239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.968 [2024-05-15 19:46:15.055413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.968 [2024-05-15 19:46:15.055424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.968 [2024-05-15 19:46:15.055431] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.968 [2024-05-15 19:46:15.055650] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.968 [2024-05-15 19:46:15.055869] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.968 [2024-05-15 19:46:15.055876] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.968 [2024-05-15 19:46:15.055883] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.968 [2024-05-15 19:46:15.059419] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.968 [2024-05-15 19:46:15.068587] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.968 [2024-05-15 19:46:15.069287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.968 [2024-05-15 19:46:15.069717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.968 [2024-05-15 19:46:15.069731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.968 [2024-05-15 19:46:15.069740] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.968 [2024-05-15 19:46:15.069977] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.968 [2024-05-15 19:46:15.070199] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.968 [2024-05-15 19:46:15.070207] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.968 [2024-05-15 19:46:15.070214] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.968 [2024-05-15 19:46:15.073772] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.968 [2024-05-15 19:46:15.082541] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.968 [2024-05-15 19:46:15.083289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.968 [2024-05-15 19:46:15.083788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.968 [2024-05-15 19:46:15.083802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.968 [2024-05-15 19:46:15.083811] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.968 [2024-05-15 19:46:15.084049] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.968 [2024-05-15 19:46:15.084272] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.968 [2024-05-15 19:46:15.084280] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.968 [2024-05-15 19:46:15.084287] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.968 [2024-05-15 19:46:15.087838] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.968 [2024-05-15 19:46:15.096391] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.968 [2024-05-15 19:46:15.097079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.968 [2024-05-15 19:46:15.097465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.968 [2024-05-15 19:46:15.097479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.968 [2024-05-15 19:46:15.097488] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.968 [2024-05-15 19:46:15.097725] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.968 [2024-05-15 19:46:15.097948] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.968 [2024-05-15 19:46:15.097956] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.968 [2024-05-15 19:46:15.097964] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.968 [2024-05-15 19:46:15.101510] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.968 [2024-05-15 19:46:15.110261] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.968 [2024-05-15 19:46:15.110984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.968 [2024-05-15 19:46:15.111366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.968 [2024-05-15 19:46:15.111380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.968 [2024-05-15 19:46:15.111389] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.968 [2024-05-15 19:46:15.111627] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.968 [2024-05-15 19:46:15.111848] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.968 [2024-05-15 19:46:15.111856] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.968 [2024-05-15 19:46:15.111863] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.968 [2024-05-15 19:46:15.115408] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.968 [2024-05-15 19:46:15.124172] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.968 [2024-05-15 19:46:15.124841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.968 [2024-05-15 19:46:15.125136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.968 [2024-05-15 19:46:15.125149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.968 [2024-05-15 19:46:15.125158] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.968 [2024-05-15 19:46:15.125405] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.968 [2024-05-15 19:46:15.125628] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.968 [2024-05-15 19:46:15.125636] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.968 [2024-05-15 19:46:15.125643] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.968 [2024-05-15 19:46:15.129180] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.968 [2024-05-15 19:46:15.138148] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.968 [2024-05-15 19:46:15.138772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.968 [2024-05-15 19:46:15.139130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.968 [2024-05-15 19:46:15.139139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:48.968 [2024-05-15 19:46:15.139147] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:48.968 [2024-05-15 19:46:15.139371] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:48.968 [2024-05-15 19:46:15.139590] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.968 [2024-05-15 19:46:15.139598] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.969 [2024-05-15 19:46:15.139605] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.969 [2024-05-15 19:46:15.143144] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.231 [2024-05-15 19:46:15.152135] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.231 [2024-05-15 19:46:15.152637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.231 [2024-05-15 19:46:15.152995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.231 [2024-05-15 19:46:15.153004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.231 [2024-05-15 19:46:15.153012] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.231 [2024-05-15 19:46:15.153231] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.231 [2024-05-15 19:46:15.153456] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.231 [2024-05-15 19:46:15.153465] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.231 [2024-05-15 19:46:15.153472] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.231 [2024-05-15 19:46:15.157012] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.231 [2024-05-15 19:46:15.165987] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.231 [2024-05-15 19:46:15.166489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.231 [2024-05-15 19:46:15.166850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.231 [2024-05-15 19:46:15.166859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.231 [2024-05-15 19:46:15.166867] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.231 [2024-05-15 19:46:15.167085] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.231 [2024-05-15 19:46:15.167303] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.231 [2024-05-15 19:46:15.167311] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.231 [2024-05-15 19:46:15.167322] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.231 [2024-05-15 19:46:15.170862] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.231 [2024-05-15 19:46:15.179843] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.231 [2024-05-15 19:46:15.180465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.231 [2024-05-15 19:46:15.180884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.231 [2024-05-15 19:46:15.180894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.231 [2024-05-15 19:46:15.180902] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.231 [2024-05-15 19:46:15.181121] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.231 [2024-05-15 19:46:15.181345] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.231 [2024-05-15 19:46:15.181354] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.231 [2024-05-15 19:46:15.181361] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.231 [2024-05-15 19:46:15.184899] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.231 [2024-05-15 19:46:15.193663] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.231 [2024-05-15 19:46:15.194298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.231 [2024-05-15 19:46:15.194664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.231 [2024-05-15 19:46:15.194675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.231 [2024-05-15 19:46:15.194682] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.231 [2024-05-15 19:46:15.194901] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.231 [2024-05-15 19:46:15.195119] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.231 [2024-05-15 19:46:15.195126] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.231 [2024-05-15 19:46:15.195132] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.231 [2024-05-15 19:46:15.198675] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.231 [2024-05-15 19:46:15.207645] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.231 [2024-05-15 19:46:15.208353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.231 [2024-05-15 19:46:15.208779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.231 [2024-05-15 19:46:15.208792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.231 [2024-05-15 19:46:15.208801] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.231 [2024-05-15 19:46:15.209039] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.231 [2024-05-15 19:46:15.209260] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.231 [2024-05-15 19:46:15.209268] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.231 [2024-05-15 19:46:15.209276] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.231 [2024-05-15 19:46:15.212825] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.231 [2024-05-15 19:46:15.221580] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.231 [2024-05-15 19:46:15.222137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.232 [2024-05-15 19:46:15.222519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.232 [2024-05-15 19:46:15.222535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.232 [2024-05-15 19:46:15.222543] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.232 [2024-05-15 19:46:15.222762] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.232 [2024-05-15 19:46:15.222980] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.232 [2024-05-15 19:46:15.222988] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.232 [2024-05-15 19:46:15.222994] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.232 [2024-05-15 19:46:15.226533] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.232 [2024-05-15 19:46:15.235496] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.232 [2024-05-15 19:46:15.236218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.232 [2024-05-15 19:46:15.236590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.232 [2024-05-15 19:46:15.236604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.232 [2024-05-15 19:46:15.236613] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.232 [2024-05-15 19:46:15.236851] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.232 [2024-05-15 19:46:15.237073] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.232 [2024-05-15 19:46:15.237081] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.232 [2024-05-15 19:46:15.237088] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.232 [2024-05-15 19:46:15.240633] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.232 [2024-05-15 19:46:15.249406] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.232 [2024-05-15 19:46:15.250049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.232 [2024-05-15 19:46:15.250434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.232 [2024-05-15 19:46:15.250448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.232 [2024-05-15 19:46:15.250457] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.232 [2024-05-15 19:46:15.250695] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.232 [2024-05-15 19:46:15.250917] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.232 [2024-05-15 19:46:15.250925] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.232 [2024-05-15 19:46:15.250932] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.232 [2024-05-15 19:46:15.254479] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.232 [2024-05-15 19:46:15.263238] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.232 [2024-05-15 19:46:15.263852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.232 [2024-05-15 19:46:15.264252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.232 [2024-05-15 19:46:15.264261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.232 [2024-05-15 19:46:15.264274] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.232 [2024-05-15 19:46:15.264498] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.232 [2024-05-15 19:46:15.264717] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.232 [2024-05-15 19:46:15.264724] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.232 [2024-05-15 19:46:15.264731] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.232 [2024-05-15 19:46:15.268265] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.232 [2024-05-15 19:46:15.277018] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.232 [2024-05-15 19:46:15.277628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.232 [2024-05-15 19:46:15.278012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.232 [2024-05-15 19:46:15.278024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.232 [2024-05-15 19:46:15.278034] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.232 [2024-05-15 19:46:15.278272] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.232 [2024-05-15 19:46:15.278500] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.232 [2024-05-15 19:46:15.278509] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.232 [2024-05-15 19:46:15.278517] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.232 [2024-05-15 19:46:15.282072] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.232 [2024-05-15 19:46:15.290836] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.232 [2024-05-15 19:46:15.291324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.232 [2024-05-15 19:46:15.291715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.232 [2024-05-15 19:46:15.291725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.232 [2024-05-15 19:46:15.291733] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.232 [2024-05-15 19:46:15.291952] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.232 [2024-05-15 19:46:15.292170] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.232 [2024-05-15 19:46:15.292178] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.232 [2024-05-15 19:46:15.292185] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.232 [2024-05-15 19:46:15.295724] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.232 [2024-05-15 19:46:15.304682] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.232 [2024-05-15 19:46:15.305276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.232 [2024-05-15 19:46:15.305686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.232 [2024-05-15 19:46:15.305696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.232 [2024-05-15 19:46:15.305704] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.232 [2024-05-15 19:46:15.305927] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.232 [2024-05-15 19:46:15.306145] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.232 [2024-05-15 19:46:15.306152] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.232 [2024-05-15 19:46:15.306158] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.232 [2024-05-15 19:46:15.309698] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.232 [2024-05-15 19:46:15.318660] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.232 [2024-05-15 19:46:15.319380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.232 [2024-05-15 19:46:15.319854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.232 [2024-05-15 19:46:15.319867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.232 [2024-05-15 19:46:15.319876] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.232 [2024-05-15 19:46:15.320114] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.232 [2024-05-15 19:46:15.320345] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.232 [2024-05-15 19:46:15.320362] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.232 [2024-05-15 19:46:15.320369] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.232 [2024-05-15 19:46:15.323908] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.232 [2024-05-15 19:46:15.332460] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.232 [2024-05-15 19:46:15.333141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.232 [2024-05-15 19:46:15.333537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.232 [2024-05-15 19:46:15.333551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.232 [2024-05-15 19:46:15.333560] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.232 [2024-05-15 19:46:15.333798] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.232 [2024-05-15 19:46:15.334020] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.232 [2024-05-15 19:46:15.334028] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.232 [2024-05-15 19:46:15.334035] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.233 [2024-05-15 19:46:15.337579] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.233 [2024-05-15 19:46:15.346339] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.233 [2024-05-15 19:46:15.347065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.233 [2024-05-15 19:46:15.347452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.233 [2024-05-15 19:46:15.347466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.233 [2024-05-15 19:46:15.347476] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.233 [2024-05-15 19:46:15.347713] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.233 [2024-05-15 19:46:15.347940] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.233 [2024-05-15 19:46:15.347949] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.233 [2024-05-15 19:46:15.347956] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.233 [2024-05-15 19:46:15.351513] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.233 [2024-05-15 19:46:15.360276] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.233 [2024-05-15 19:46:15.361010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.233 [2024-05-15 19:46:15.361395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.233 [2024-05-15 19:46:15.361409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.233 [2024-05-15 19:46:15.361418] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.233 [2024-05-15 19:46:15.361656] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.233 [2024-05-15 19:46:15.361878] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.233 [2024-05-15 19:46:15.361886] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.233 [2024-05-15 19:46:15.361893] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.233 [2024-05-15 19:46:15.365437] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.233 [2024-05-15 19:46:15.374195] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.233 [2024-05-15 19:46:15.374805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.233 [2024-05-15 19:46:15.375157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.233 [2024-05-15 19:46:15.375167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.233 [2024-05-15 19:46:15.375175] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.233 [2024-05-15 19:46:15.375398] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.233 [2024-05-15 19:46:15.375617] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.233 [2024-05-15 19:46:15.375624] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.233 [2024-05-15 19:46:15.375631] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.233 [2024-05-15 19:46:15.379165] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.233 [2024-05-15 19:46:15.388129] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.233 [2024-05-15 19:46:15.388726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.233 [2024-05-15 19:46:15.389085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.233 [2024-05-15 19:46:15.389094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.233 [2024-05-15 19:46:15.389101] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.233 [2024-05-15 19:46:15.389324] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.233 [2024-05-15 19:46:15.389542] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.233 [2024-05-15 19:46:15.389554] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.233 [2024-05-15 19:46:15.389561] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.233 [2024-05-15 19:46:15.393092] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.233 [2024-05-15 19:46:15.402053] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.233 [2024-05-15 19:46:15.402773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.233 [2024-05-15 19:46:15.403159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.233 [2024-05-15 19:46:15.403172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.233 [2024-05-15 19:46:15.403181] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.233 [2024-05-15 19:46:15.403428] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.233 [2024-05-15 19:46:15.403650] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.233 [2024-05-15 19:46:15.403658] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.233 [2024-05-15 19:46:15.403665] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.233 [2024-05-15 19:46:15.407205] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.495 [2024-05-15 19:46:15.415966] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.495 [2024-05-15 19:46:15.416655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.495 [2024-05-15 19:46:15.417005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.495 [2024-05-15 19:46:15.417018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.495 [2024-05-15 19:46:15.417027] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.495 [2024-05-15 19:46:15.417265] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.495 [2024-05-15 19:46:15.417494] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.495 [2024-05-15 19:46:15.417502] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.495 [2024-05-15 19:46:15.417510] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.495 [2024-05-15 19:46:15.421047] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.495 [2024-05-15 19:46:15.429808] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.495 [2024-05-15 19:46:15.430366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.495 [2024-05-15 19:46:15.430818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.495 [2024-05-15 19:46:15.430831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.495 [2024-05-15 19:46:15.430840] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.495 [2024-05-15 19:46:15.431078] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.495 [2024-05-15 19:46:15.431300] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.495 [2024-05-15 19:46:15.431308] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.495 [2024-05-15 19:46:15.431331] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.495 [2024-05-15 19:46:15.434872] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.495 [2024-05-15 19:46:15.443621] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.495 [2024-05-15 19:46:15.444350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.495 [2024-05-15 19:46:15.444687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.495 [2024-05-15 19:46:15.444700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.495 [2024-05-15 19:46:15.444709] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.495 [2024-05-15 19:46:15.444947] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.495 [2024-05-15 19:46:15.445168] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.495 [2024-05-15 19:46:15.445177] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.495 [2024-05-15 19:46:15.445184] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.495 [2024-05-15 19:46:15.448731] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.495 [2024-05-15 19:46:15.457500] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.495 [2024-05-15 19:46:15.458139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.495 [2024-05-15 19:46:15.458501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.495 [2024-05-15 19:46:15.458512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.495 [2024-05-15 19:46:15.458520] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.495 [2024-05-15 19:46:15.458740] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.495 [2024-05-15 19:46:15.458957] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.495 [2024-05-15 19:46:15.458965] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.495 [2024-05-15 19:46:15.458972] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.495 [2024-05-15 19:46:15.462510] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.495 [2024-05-15 19:46:15.471470] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.495 [2024-05-15 19:46:15.472019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.495 [2024-05-15 19:46:15.472398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.495 [2024-05-15 19:46:15.472409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.496 [2024-05-15 19:46:15.472417] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.496 [2024-05-15 19:46:15.472636] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.496 [2024-05-15 19:46:15.472854] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.496 [2024-05-15 19:46:15.472861] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.496 [2024-05-15 19:46:15.472868] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.496 [2024-05-15 19:46:15.476411] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.496 [2024-05-15 19:46:15.485371] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.496 [2024-05-15 19:46:15.486089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.496 [2024-05-15 19:46:15.486477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.496 [2024-05-15 19:46:15.486491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.496 [2024-05-15 19:46:15.486500] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.496 [2024-05-15 19:46:15.486738] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.496 [2024-05-15 19:46:15.486960] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.496 [2024-05-15 19:46:15.486968] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.496 [2024-05-15 19:46:15.486975] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.496 [2024-05-15 19:46:15.490526] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.496 [2024-05-15 19:46:15.499282] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.496 [2024-05-15 19:46:15.500048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.496 [2024-05-15 19:46:15.500430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.496 [2024-05-15 19:46:15.500444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.496 [2024-05-15 19:46:15.500453] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.496 [2024-05-15 19:46:15.500690] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.496 [2024-05-15 19:46:15.500912] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.496 [2024-05-15 19:46:15.500920] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.496 [2024-05-15 19:46:15.500927] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.496 [2024-05-15 19:46:15.504472] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.496 [2024-05-15 19:46:15.513231] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.496 [2024-05-15 19:46:15.513936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.496 [2024-05-15 19:46:15.514324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.496 [2024-05-15 19:46:15.514338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.496 [2024-05-15 19:46:15.514347] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.496 [2024-05-15 19:46:15.514584] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.496 [2024-05-15 19:46:15.514806] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.496 [2024-05-15 19:46:15.514815] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.496 [2024-05-15 19:46:15.514822] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.496 [2024-05-15 19:46:15.518367] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.496 [2024-05-15 19:46:15.527125] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.496 [2024-05-15 19:46:15.527825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.496 [2024-05-15 19:46:15.528216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.496 [2024-05-15 19:46:15.528229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.496 [2024-05-15 19:46:15.528238] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.496 [2024-05-15 19:46:15.528484] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.496 [2024-05-15 19:46:15.528707] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.496 [2024-05-15 19:46:15.528714] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.496 [2024-05-15 19:46:15.528721] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.496 [2024-05-15 19:46:15.532258] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.496 [2024-05-15 19:46:15.541019] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.496 [2024-05-15 19:46:15.541716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.496 [2024-05-15 19:46:15.542122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.496 [2024-05-15 19:46:15.542135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.496 [2024-05-15 19:46:15.542144] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.496 [2024-05-15 19:46:15.542390] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.496 [2024-05-15 19:46:15.542613] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.496 [2024-05-15 19:46:15.542621] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.496 [2024-05-15 19:46:15.542628] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.496 [2024-05-15 19:46:15.546169] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.496 [2024-05-15 19:46:15.554938] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.496 [2024-05-15 19:46:15.555636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.496 [2024-05-15 19:46:15.556020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.496 [2024-05-15 19:46:15.556032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.496 [2024-05-15 19:46:15.556042] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.497 [2024-05-15 19:46:15.556279] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.497 [2024-05-15 19:46:15.556509] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.497 [2024-05-15 19:46:15.556518] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.497 [2024-05-15 19:46:15.556525] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.497 [2024-05-15 19:46:15.560068] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.497 [2024-05-15 19:46:15.568829] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.497 [2024-05-15 19:46:15.569415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.497 [2024-05-15 19:46:15.569821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.497 [2024-05-15 19:46:15.569834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.497 [2024-05-15 19:46:15.569843] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.497 [2024-05-15 19:46:15.570080] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.497 [2024-05-15 19:46:15.570302] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.497 [2024-05-15 19:46:15.570310] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.497 [2024-05-15 19:46:15.570326] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.497 [2024-05-15 19:46:15.573868] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.497 [2024-05-15 19:46:15.582645] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.497 [2024-05-15 19:46:15.583369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.497 [2024-05-15 19:46:15.583653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.497 [2024-05-15 19:46:15.583667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.497 [2024-05-15 19:46:15.583676] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.497 [2024-05-15 19:46:15.583914] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.497 [2024-05-15 19:46:15.584137] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.497 [2024-05-15 19:46:15.584144] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.497 [2024-05-15 19:46:15.584151] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.497 [2024-05-15 19:46:15.587698] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.497 [2024-05-15 19:46:15.596462] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.497 [2024-05-15 19:46:15.597120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.497 [2024-05-15 19:46:15.597496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.497 [2024-05-15 19:46:15.597510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.497 [2024-05-15 19:46:15.597519] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.497 [2024-05-15 19:46:15.597757] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.497 [2024-05-15 19:46:15.597979] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.497 [2024-05-15 19:46:15.597987] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.497 [2024-05-15 19:46:15.597994] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.497 [2024-05-15 19:46:15.601541] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.497 [2024-05-15 19:46:15.610502] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.497 [2024-05-15 19:46:15.611204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.497 [2024-05-15 19:46:15.611603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.497 [2024-05-15 19:46:15.611622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.497 [2024-05-15 19:46:15.611631] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.497 [2024-05-15 19:46:15.611869] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.497 [2024-05-15 19:46:15.612090] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.497 [2024-05-15 19:46:15.612098] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.497 [2024-05-15 19:46:15.612105] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.497 [2024-05-15 19:46:15.615654] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.497 [2024-05-15 19:46:15.624419] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.497 [2024-05-15 19:46:15.625148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.497 [2024-05-15 19:46:15.625486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.497 [2024-05-15 19:46:15.625500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.497 [2024-05-15 19:46:15.625510] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.497 [2024-05-15 19:46:15.625748] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.497 [2024-05-15 19:46:15.625970] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.497 [2024-05-15 19:46:15.625978] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.497 [2024-05-15 19:46:15.625985] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.497 [2024-05-15 19:46:15.629529] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.497 [2024-05-15 19:46:15.638291] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.497 [2024-05-15 19:46:15.639007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.497 [2024-05-15 19:46:15.639391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.498 [2024-05-15 19:46:15.639405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.498 [2024-05-15 19:46:15.639414] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.498 [2024-05-15 19:46:15.639652] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.498 [2024-05-15 19:46:15.639874] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.498 [2024-05-15 19:46:15.639882] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.498 [2024-05-15 19:46:15.639889] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.498 [2024-05-15 19:46:15.643434] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.498 [2024-05-15 19:46:15.652201] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.498 [2024-05-15 19:46:15.652891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.498 [2024-05-15 19:46:15.653281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.498 [2024-05-15 19:46:15.653293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.498 [2024-05-15 19:46:15.653307] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.498 [2024-05-15 19:46:15.653553] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.498 [2024-05-15 19:46:15.653775] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.498 [2024-05-15 19:46:15.653783] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.498 [2024-05-15 19:46:15.653791] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.498 [2024-05-15 19:46:15.657330] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.498 [2024-05-15 19:46:15.666087] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.498 [2024-05-15 19:46:15.666821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.498 [2024-05-15 19:46:15.667206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.498 [2024-05-15 19:46:15.667218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.498 [2024-05-15 19:46:15.667227] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.498 [2024-05-15 19:46:15.667473] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.498 [2024-05-15 19:46:15.667696] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.498 [2024-05-15 19:46:15.667704] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.498 [2024-05-15 19:46:15.667711] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.498 [2024-05-15 19:46:15.671252] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.761 [2024-05-15 19:46:15.680013] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.761 [2024-05-15 19:46:15.680697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.761 [2024-05-15 19:46:15.681083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.761 [2024-05-15 19:46:15.681096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.761 [2024-05-15 19:46:15.681105] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.761 [2024-05-15 19:46:15.681349] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.761 [2024-05-15 19:46:15.681571] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.761 [2024-05-15 19:46:15.681579] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.761 [2024-05-15 19:46:15.681586] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.761 [2024-05-15 19:46:15.685129] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.761 [2024-05-15 19:46:15.693888] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.761 [2024-05-15 19:46:15.694612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.761 [2024-05-15 19:46:15.694994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.761 [2024-05-15 19:46:15.695007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.761 [2024-05-15 19:46:15.695016] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.761 [2024-05-15 19:46:15.695258] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.761 [2024-05-15 19:46:15.695488] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.761 [2024-05-15 19:46:15.695497] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.761 [2024-05-15 19:46:15.695504] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.761 [2024-05-15 19:46:15.699051] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.761 [2024-05-15 19:46:15.707807] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.761 [2024-05-15 19:46:15.708416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.761 [2024-05-15 19:46:15.708810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.761 [2024-05-15 19:46:15.708822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.761 [2024-05-15 19:46:15.708832] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.761 [2024-05-15 19:46:15.709069] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.761 [2024-05-15 19:46:15.709291] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.761 [2024-05-15 19:46:15.709298] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.761 [2024-05-15 19:46:15.709306] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.761 [2024-05-15 19:46:15.712855] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.761 [2024-05-15 19:46:15.721612] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.761 [2024-05-15 19:46:15.722291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.761 [2024-05-15 19:46:15.722583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.761 [2024-05-15 19:46:15.722597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.761 [2024-05-15 19:46:15.722606] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.761 [2024-05-15 19:46:15.722844] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.761 [2024-05-15 19:46:15.723065] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.761 [2024-05-15 19:46:15.723074] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.761 [2024-05-15 19:46:15.723081] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.761 [2024-05-15 19:46:15.726626] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.761 [2024-05-15 19:46:15.735591] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.761 [2024-05-15 19:46:15.736319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.761 [2024-05-15 19:46:15.736712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.761 [2024-05-15 19:46:15.736725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.761 [2024-05-15 19:46:15.736735] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.761 [2024-05-15 19:46:15.736972] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.761 [2024-05-15 19:46:15.737198] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.761 [2024-05-15 19:46:15.737206] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.761 [2024-05-15 19:46:15.737214] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.761 [2024-05-15 19:46:15.740758] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.761 [2024-05-15 19:46:15.749527] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.761 [2024-05-15 19:46:15.750138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.761 [2024-05-15 19:46:15.750543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.761 [2024-05-15 19:46:15.750557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.761 [2024-05-15 19:46:15.750567] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.761 [2024-05-15 19:46:15.750805] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.761 [2024-05-15 19:46:15.751026] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.761 [2024-05-15 19:46:15.751034] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.761 [2024-05-15 19:46:15.751041] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.761 [2024-05-15 19:46:15.754594] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.761 [2024-05-15 19:46:15.763354] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.761 [2024-05-15 19:46:15.764084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.761 [2024-05-15 19:46:15.764470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.761 [2024-05-15 19:46:15.764484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.761 [2024-05-15 19:46:15.764494] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.761 [2024-05-15 19:46:15.764731] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.761 [2024-05-15 19:46:15.764953] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.761 [2024-05-15 19:46:15.764961] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.761 [2024-05-15 19:46:15.764968] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.761 [2024-05-15 19:46:15.768512] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.761 [2024-05-15 19:46:15.777269] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.761 [2024-05-15 19:46:15.778002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.761 [2024-05-15 19:46:15.778384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.761 [2024-05-15 19:46:15.778398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.761 [2024-05-15 19:46:15.778407] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.761 [2024-05-15 19:46:15.778645] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.761 [2024-05-15 19:46:15.778867] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.761 [2024-05-15 19:46:15.778879] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.761 [2024-05-15 19:46:15.778887] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.761 [2024-05-15 19:46:15.782432] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.761 [2024-05-15 19:46:15.791187] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.761 [2024-05-15 19:46:15.791742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.761 [2024-05-15 19:46:15.792141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.761 [2024-05-15 19:46:15.792153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.761 [2024-05-15 19:46:15.792163] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.761 [2024-05-15 19:46:15.792408] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.761 [2024-05-15 19:46:15.792630] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.761 [2024-05-15 19:46:15.792638] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.761 [2024-05-15 19:46:15.792646] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.761 [2024-05-15 19:46:15.796185] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.761 [2024-05-15 19:46:15.805153] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:30:49.761 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3794909 Killed "${NVMF_APP[@]}" "$@" 
00:30:49.761 [2024-05-15 19:46:15.805778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:30:49.761 [2024-05-15 19:46:15.806169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:30:49.761 [2024-05-15 19:46:15.806182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 
00:30:49.762 [2024-05-15 19:46:15.806191] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 
00:30:49.762 [2024-05-15 19:46:15.806436] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 
00:30:49.762 19:46:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 
00:30:49.762 [2024-05-15 19:46:15.806660] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:30:49.762 [2024-05-15 19:46:15.806668] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:30:49.762 [2024-05-15 19:46:15.806675] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:49.762 19:46:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 
00:30:49.762 19:46:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:30:49.762 19:46:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 
00:30:49.762 19:46:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 
00:30:49.762 [2024-05-15 19:46:15.810215] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.762 19:46:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3796532 
00:30:49.762 19:46:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3796532 
00:30:49.762 19:46:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 
00:30:49.762 19:46:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 3796532 ']' 
00:30:49.762 19:46:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 
00:30:49.762 19:46:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 
00:30:49.762 19:46:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:49.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:49.762 19:46:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:49.762 19:46:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:49.762 [2024-05-15 19:46:15.818973] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.762 [2024-05-15 19:46:15.819724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.762 [2024-05-15 19:46:15.820124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.762 [2024-05-15 19:46:15.820138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.762 [2024-05-15 19:46:15.820147] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.762 [2024-05-15 19:46:15.820390] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.762 [2024-05-15 19:46:15.820614] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.762 [2024-05-15 19:46:15.820622] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.762 [2024-05-15 19:46:15.820630] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.762 [2024-05-15 19:46:15.824175] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.762 [2024-05-15 19:46:15.832936] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.762 [2024-05-15 19:46:15.833557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.762 [2024-05-15 19:46:15.833876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.762 [2024-05-15 19:46:15.833890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.762 [2024-05-15 19:46:15.833900] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.762 [2024-05-15 19:46:15.834138] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.762 [2024-05-15 19:46:15.834367] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.762 [2024-05-15 19:46:15.834376] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.762 [2024-05-15 19:46:15.834384] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.762 [2024-05-15 19:46:15.837926] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
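The shell trace interleaved above marks the point where the long-running target was killed (bdevperf.sh line 35, the "Killed" message) and tgt_init/nvmfappstart bring up a fresh nvmf_tgt (pid 3796532) inside the cvl_0_0_ns_spdk namespace, then wait for it to answer on /var/tmp/spdk.sock. A simplified sketch of that restart-and-wait pattern follows; the helpers named in the trace (nvmfappstart, waitforlisten) live in SPDK's test/nvmf/common.sh and test/common/autotest_common.sh, and the loop body below is an assumption, not the real helpers:

    # Hypothetical restart-and-wait loop; command line copied from the trace above (paths shortened).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # Poll the RPC socket until the new target answers (rpc_get_methods is a standard
    # SPDK RPC); the real waitforlisten also enforces a retry limit (max_retries=100 above).
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "nvmf_tgt (pid ${nvmfpid}) is listening on /var/tmp/spdk.sock"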
00:30:49.762 [2024-05-15 19:46:15.846896] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.762 [2024-05-15 19:46:15.847545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.762 [2024-05-15 19:46:15.847930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.762 [2024-05-15 19:46:15.847943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.762 [2024-05-15 19:46:15.847952] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.762 [2024-05-15 19:46:15.848190] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.762 [2024-05-15 19:46:15.848419] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.762 [2024-05-15 19:46:15.848432] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.762 [2024-05-15 19:46:15.848440] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.762 [2024-05-15 19:46:15.851994] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.762 [2024-05-15 19:46:15.860758] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.762 [2024-05-15 19:46:15.861416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.762 [2024-05-15 19:46:15.861869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.762 [2024-05-15 19:46:15.861882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.762 [2024-05-15 19:46:15.861891] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.762 [2024-05-15 19:46:15.862129] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.762 [2024-05-15 19:46:15.862359] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.762 [2024-05-15 19:46:15.862368] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.762 [2024-05-15 19:46:15.862375] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.762 [2024-05-15 19:46:15.864694] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:30:49.762 [2024-05-15 19:46:15.864740] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:49.762 [2024-05-15 19:46:15.865914] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.762 [2024-05-15 19:46:15.874677] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.762 [2024-05-15 19:46:15.875121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.762 [2024-05-15 19:46:15.875322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.762 [2024-05-15 19:46:15.875337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.762 [2024-05-15 19:46:15.875345] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.762 [2024-05-15 19:46:15.875566] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.762 [2024-05-15 19:46:15.875787] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.762 [2024-05-15 19:46:15.875795] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.762 [2024-05-15 19:46:15.875802] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.762 [2024-05-15 19:46:15.879342] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.762 [2024-05-15 19:46:15.888518] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.762 [2024-05-15 19:46:15.889222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.762 [2024-05-15 19:46:15.889487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.762 [2024-05-15 19:46:15.889503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.762 [2024-05-15 19:46:15.889513] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.762 [2024-05-15 19:46:15.889751] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.762 [2024-05-15 19:46:15.889979] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.762 [2024-05-15 19:46:15.889988] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.762 [2024-05-15 19:46:15.889995] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.762 [2024-05-15 19:46:15.893539] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.762 EAL: No free 2048 kB hugepages reported on node 1 00:30:49.762 [2024-05-15 19:46:15.902301] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.762 [2024-05-15 19:46:15.902911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.762 [2024-05-15 19:46:15.903293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.762 [2024-05-15 19:46:15.903304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.762 [2024-05-15 19:46:15.903312] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.762 [2024-05-15 19:46:15.903538] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.762 [2024-05-15 19:46:15.903757] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.762 [2024-05-15 19:46:15.903766] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.762 [2024-05-15 19:46:15.903773] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.762 [2024-05-15 19:46:15.907317] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.762 [2024-05-15 19:46:15.916283] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.762 [2024-05-15 19:46:15.917015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.762 [2024-05-15 19:46:15.917352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.762 [2024-05-15 19:46:15.917367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.762 [2024-05-15 19:46:15.917377] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.762 [2024-05-15 19:46:15.917615] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.762 [2024-05-15 19:46:15.917837] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.762 [2024-05-15 19:46:15.917846] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.762 [2024-05-15 19:46:15.917854] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.762 [2024-05-15 19:46:15.921398] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
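The single EAL notice at the top of this block ("No free 2048 kB hugepages reported on node 1") is informational: NUMA node 1 had no free 2 MB hugepages when the new target's DPDK environment initialized. It is non-fatal in this run, since the app goes on to start its reactors below. A quick way to inspect the per-node counters the notice refers to (standard Linux sysfs paths, nothing SPDK-specific):

    # Print free/total 2 MB hugepages for every NUMA node
    for node in /sys/devices/system/node/node*/hugepages/hugepages-2048kB; do
        printf '%s: free=%s total=%s\n' "$node" \
            "$(cat "$node/free_hugepages")" "$(cat "$node/nr_hugepages")"
    done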
00:30:49.762 [2024-05-15 19:46:15.930151] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.762 [2024-05-15 19:46:15.930758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.762 [2024-05-15 19:46:15.931049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.762 [2024-05-15 19:46:15.931060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:49.762 [2024-05-15 19:46:15.931068] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:49.762 [2024-05-15 19:46:15.931287] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:49.762 [2024-05-15 19:46:15.931514] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.762 [2024-05-15 19:46:15.931529] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.762 [2024-05-15 19:46:15.931537] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.762 [2024-05-15 19:46:15.935071] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.762 [2024-05-15 19:46:15.936199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:50.025 [2024-05-15 19:46:15.944043] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.025 [2024-05-15 19:46:15.944719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.025 [2024-05-15 19:46:15.945179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.025 [2024-05-15 19:46:15.945192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:50.025 [2024-05-15 19:46:15.945202] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:50.025 [2024-05-15 19:46:15.945448] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:50.025 [2024-05-15 19:46:15.945672] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.025 [2024-05-15 19:46:15.945681] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.025 [2024-05-15 19:46:15.945689] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.025 [2024-05-15 19:46:15.949230] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
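spdk_app_start reports "Total cores available: 3" because the new target was launched with -m 0xE: the mask 0xE is binary 1110, so cores 1, 2 and 3 are enabled and core 0 is excluded, which matches the three reactor threads started later in this section. A small decoder for any such mask (plain bash arithmetic, not an SPDK tool):

    mask=0xE                                  # core mask passed to nvmf_tgt above
    for core in $(seq 0 63); do
        # test bit 'core' of the mask and report the enabled cores
        (( (mask >> core) & 1 )) && echo "core ${core} enabled"
    done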
00:30:50.025 [2024-05-15 19:46:15.958008] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.025 [2024-05-15 19:46:15.958689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.025 [2024-05-15 19:46:15.958976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.025 [2024-05-15 19:46:15.958990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:50.025 [2024-05-15 19:46:15.959000] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:50.025 [2024-05-15 19:46:15.959237] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:50.025 [2024-05-15 19:46:15.959466] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.025 [2024-05-15 19:46:15.959475] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.025 [2024-05-15 19:46:15.959483] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.025 [2024-05-15 19:46:15.963027] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:50.025 [2024-05-15 19:46:15.971797] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.025 [2024-05-15 19:46:15.972595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.025 [2024-05-15 19:46:15.972994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.025 [2024-05-15 19:46:15.973008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:50.026 [2024-05-15 19:46:15.973018] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:50.026 [2024-05-15 19:46:15.973256] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:50.026 [2024-05-15 19:46:15.973487] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.026 [2024-05-15 19:46:15.973503] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.026 [2024-05-15 19:46:15.973511] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.026 [2024-05-15 19:46:15.977054] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:50.026 [2024-05-15 19:46:15.985657] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.026 [2024-05-15 19:46:15.986373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.026 [2024-05-15 19:46:15.986826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.026 [2024-05-15 19:46:15.986840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:50.026 [2024-05-15 19:46:15.986849] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:50.026 [2024-05-15 19:46:15.987087] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:50.026 [2024-05-15 19:46:15.987310] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.026 [2024-05-15 19:46:15.987329] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.026 [2024-05-15 19:46:15.987336] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.026 [2024-05-15 19:46:15.990880] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:50.026 [2024-05-15 19:46:15.999540] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.026 [2024-05-15 19:46:15.999857] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:50.026 [2024-05-15 19:46:15.999884] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:50.026 [2024-05-15 19:46:15.999892] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:50.026 [2024-05-15 19:46:15.999899] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:50.026 [2024-05-15 19:46:15.999904] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
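The app_setup_trace notices above name the exact commands for inspecting the nvmf tracepoints enabled by the 0xFFFF group mask. A sketch of both options, assuming the spdk_trace binary sits at the usual build location inside this workspace:

    # Capture a snapshot of events at runtime from shm id 0 of the running target.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0
    # Or copy the shared-memory trace file for offline analysis/debug, as the notice suggests.
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0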
00:30:50.026 [2024-05-15 19:46:16.000047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:50.026 [2024-05-15 19:46:16.000188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:50.026 [2024-05-15 19:46:16.000265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.026 [2024-05-15 19:46:16.000188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:50.026 [2024-05-15 19:46:16.000685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.026 [2024-05-15 19:46:16.000701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:50.026 [2024-05-15 19:46:16.000711] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:50.026 [2024-05-15 19:46:16.000949] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:50.026 [2024-05-15 19:46:16.001172] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.026 [2024-05-15 19:46:16.001181] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.026 [2024-05-15 19:46:16.001189] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.026 [2024-05-15 19:46:16.004740] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:50.026 [2024-05-15 19:46:16.013517] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.026 [2024-05-15 19:46:16.014196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.026 [2024-05-15 19:46:16.014533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.026 [2024-05-15 19:46:16.014549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:50.026 [2024-05-15 19:46:16.014559] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:50.026 [2024-05-15 19:46:16.014800] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:50.026 [2024-05-15 19:46:16.015024] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.026 [2024-05-15 19:46:16.015033] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.026 [2024-05-15 19:46:16.015040] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.026 [2024-05-15 19:46:16.018587] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:50.026 [2024-05-15 19:46:16.027371] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.026 [2024-05-15 19:46:16.028132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.026 [2024-05-15 19:46:16.028574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.026 [2024-05-15 19:46:16.028590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:50.026 [2024-05-15 19:46:16.028600] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:50.026 [2024-05-15 19:46:16.028840] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:50.026 [2024-05-15 19:46:16.029063] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.026 [2024-05-15 19:46:16.029072] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.026 [2024-05-15 19:46:16.029080] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.026 [2024-05-15 19:46:16.032642] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:50.026 [2024-05-15 19:46:16.041191] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.026 [2024-05-15 19:46:16.041801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.026 [2024-05-15 19:46:16.042203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.026 [2024-05-15 19:46:16.042216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:50.026 [2024-05-15 19:46:16.042227] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:50.026 [2024-05-15 19:46:16.042472] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:50.026 [2024-05-15 19:46:16.042696] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.026 [2024-05-15 19:46:16.042705] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.026 [2024-05-15 19:46:16.042713] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.026 [2024-05-15 19:46:16.046254] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:50.026 [2024-05-15 19:46:16.055034] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.026 [2024-05-15 19:46:16.055511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.026 [2024-05-15 19:46:16.055887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.026 [2024-05-15 19:46:16.055903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:50.026 [2024-05-15 19:46:16.055911] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:50.026 [2024-05-15 19:46:16.056131] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:50.026 [2024-05-15 19:46:16.056356] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.026 [2024-05-15 19:46:16.056366] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.026 [2024-05-15 19:46:16.056373] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.026 [2024-05-15 19:46:16.059910] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:50.026 [2024-05-15 19:46:16.068883] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.026 [2024-05-15 19:46:16.069584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.026 [2024-05-15 19:46:16.070030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.026 [2024-05-15 19:46:16.070044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:50.026 [2024-05-15 19:46:16.070054] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:50.026 [2024-05-15 19:46:16.070292] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:50.026 [2024-05-15 19:46:16.070523] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.026 [2024-05-15 19:46:16.070537] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.026 [2024-05-15 19:46:16.070545] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.026 [2024-05-15 19:46:16.074088] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:50.026 [2024-05-15 19:46:16.082853] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.026 [2024-05-15 19:46:16.083449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.026 [2024-05-15 19:46:16.083665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.026 [2024-05-15 19:46:16.083678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:50.026 [2024-05-15 19:46:16.083687] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:50.026 [2024-05-15 19:46:16.083909] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:50.026 [2024-05-15 19:46:16.084129] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.026 [2024-05-15 19:46:16.084138] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.026 [2024-05-15 19:46:16.084145] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.026 [2024-05-15 19:46:16.087691] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:50.026 19:46:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:50.026 19:46:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:30:50.026 19:46:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:50.026 19:46:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:50.026 19:46:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:50.027 [2024-05-15 19:46:16.096659] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.027 [2024-05-15 19:46:16.097298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.027 [2024-05-15 19:46:16.097691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.027 [2024-05-15 19:46:16.097702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:50.027 [2024-05-15 19:46:16.097710] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:50.027 [2024-05-15 19:46:16.097929] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:50.027 [2024-05-15 19:46:16.098149] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.027 [2024-05-15 19:46:16.098158] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.027 [2024-05-15 19:46:16.098165] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.027 [2024-05-15 19:46:16.101706] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:50.027 [2024-05-15 19:46:16.110466] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.027 [2024-05-15 19:46:16.110867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.027 [2024-05-15 19:46:16.111249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.027 [2024-05-15 19:46:16.111260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:50.027 [2024-05-15 19:46:16.111268] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:50.027 [2024-05-15 19:46:16.111493] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:50.027 [2024-05-15 19:46:16.111713] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.027 [2024-05-15 19:46:16.111722] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.027 [2024-05-15 19:46:16.111729] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.027 [2024-05-15 19:46:16.115268] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:50.027 [2024-05-15 19:46:16.124449] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.027 [2024-05-15 19:46:16.125039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.027 [2024-05-15 19:46:16.125448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.027 [2024-05-15 19:46:16.125464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:50.027 [2024-05-15 19:46:16.125474] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:50.027 [2024-05-15 19:46:16.125712] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:50.027 [2024-05-15 19:46:16.125935] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.027 [2024-05-15 19:46:16.125945] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.027 [2024-05-15 19:46:16.125952] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.027 19:46:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:50.027 19:46:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:50.027 19:46:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.027 19:46:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:50.027 [2024-05-15 19:46:16.129500] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:50.027 [2024-05-15 19:46:16.133754] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:50.027 [2024-05-15 19:46:16.138257] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.027 [2024-05-15 19:46:16.138964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.027 19:46:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.027 [2024-05-15 19:46:16.139398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.027 [2024-05-15 19:46:16.139413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:50.027 [2024-05-15 19:46:16.139423] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:50.027 19:46:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:50.027 [2024-05-15 19:46:16.139662] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:50.027 19:46:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.027 [2024-05-15 19:46:16.139886] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.027 [2024-05-15 19:46:16.139895] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.027 [2024-05-15 19:46:16.139903] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.027 19:46:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:50.027 [2024-05-15 19:46:16.143450] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:50.027 [2024-05-15 19:46:16.152220] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.027 [2024-05-15 19:46:16.152918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.027 [2024-05-15 19:46:16.153263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.027 [2024-05-15 19:46:16.153277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:50.027 [2024-05-15 19:46:16.153286] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:50.027 [2024-05-15 19:46:16.153532] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:50.027 [2024-05-15 19:46:16.153756] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.027 [2024-05-15 19:46:16.153766] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.027 [2024-05-15 19:46:16.153773] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.027 [2024-05-15 19:46:16.157318] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:50.027 [2024-05-15 19:46:16.166081] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.027 [2024-05-15 19:46:16.166684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.027 [2024-05-15 19:46:16.166943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.027 [2024-05-15 19:46:16.166959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:50.027 [2024-05-15 19:46:16.166969] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:50.027 [2024-05-15 19:46:16.167208] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:50.027 [2024-05-15 19:46:16.167447] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.027 [2024-05-15 19:46:16.167456] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.027 [2024-05-15 19:46:16.167464] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.027 Malloc0 00:30:50.027 19:46:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.027 19:46:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:50.027 19:46:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.027 19:46:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:50.027 [2024-05-15 19:46:16.171006] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:50.027 [2024-05-15 19:46:16.179978] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.027 19:46:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.027 [2024-05-15 19:46:16.180485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.027 19:46:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:50.027 [2024-05-15 19:46:16.180791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.027 [2024-05-15 19:46:16.180802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:50.027 [2024-05-15 19:46:16.180810] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:50.027 [2024-05-15 19:46:16.181030] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:50.027 19:46:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.027 [2024-05-15 19:46:16.181248] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.027 19:46:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:50.027 [2024-05-15 19:46:16.181257] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.027 [2024-05-15 19:46:16.181267] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.027 [2024-05-15 19:46:16.184804] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:50.027 19:46:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.027 19:46:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:50.027 19:46:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.027 19:46:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:50.027 [2024-05-15 19:46:16.193770] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.027 [2024-05-15 19:46:16.194366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.027 [2024-05-15 19:46:16.194748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.027 [2024-05-15 19:46:16.194759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17994c0 with addr=10.0.0.2, port=4420 00:30:50.027 [2024-05-15 19:46:16.194767] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17994c0 is same with the state(5) to be set 00:30:50.027 [2024-05-15 19:46:16.194990] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17994c0 (9): Bad file descriptor 00:30:50.027 [2024-05-15 19:46:16.195211] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.027 [2024-05-15 19:46:16.195220] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.027 [2024-05-15 19:46:16.195231] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.027 [2024-05-15 19:46:16.198774] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:50.028 [2024-05-15 19:46:16.199453] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:30:50.028 [2024-05-15 19:46:16.199671] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:50.028 19:46:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.028 19:46:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3795513 00:30:50.028 [2024-05-15 19:46:16.207744] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.288 [2024-05-15 19:46:16.377219] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
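The rpc_cmd calls interleaved with the reconnect noise above (host/bdevperf.sh lines 17-21) bring the target side up in this order; collected here as a sketch using scripts/rpc.py, which is an assumption about how one would replay them outside the harness's rpc_cmd wrapper:

    # TCP transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem, its
    # namespace, and the 10.0.0.2:4420 listener the host has been retrying against.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener notice ("NVMe/TCP Target Listening on 10.0.0.2 port 4420") appears, the next reset attempt succeeds and bdevperf runs to completion, producing the latency table below.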
00:31:00.290 00:31:00.290 Latency(us) 00:31:00.290 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:00.290 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:00.290 Verification LBA range: start 0x0 length 0x4000 00:31:00.290 Nvme1n1 : 15.05 6869.43 26.83 8422.10 0.00 8322.28 1064.96 44564.48 00:31:00.290 =================================================================================================================== 00:31:00.290 Total : 6869.43 26.83 8422.10 0.00 8322.28 1064.96 44564.48 00:31:00.290 19:46:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:31:00.290 19:46:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:00.290 19:46:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.290 19:46:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:00.290 19:46:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.290 19:46:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:31:00.290 19:46:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:31:00.290 19:46:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:00.290 19:46:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:31:00.290 19:46:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:00.290 19:46:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:31:00.290 19:46:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:00.290 19:46:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:00.290 rmmod nvme_tcp 00:31:00.290 rmmod nvme_fabrics 00:31:00.290 rmmod nvme_keyring 00:31:00.290 19:46:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:00.290 19:46:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:31:00.290 19:46:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:31:00.290 19:46:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 3796532 ']' 00:31:00.290 19:46:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 3796532 00:31:00.290 19:46:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 3796532 ']' 00:31:00.290 19:46:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@950 -- # kill -0 3796532 00:31:00.290 19:46:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # uname 00:31:00.290 19:46:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:00.290 19:46:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3796532 00:31:00.290 19:46:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:31:00.290 19:46:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:00.290 19:46:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3796532' 00:31:00.290 killing process with pid 3796532 00:31:00.290 19:46:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@965 -- # kill 3796532 00:31:00.290 [2024-05-15 19:46:25.611408] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:00.290 19:46:25 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@970 -- # wait 3796532 00:31:00.290 19:46:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:00.290 19:46:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:00.290 19:46:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:00.290 19:46:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:00.290 19:46:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:00.290 19:46:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:00.290 19:46:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:00.290 19:46:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:01.672 19:46:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:01.672 00:31:01.672 real 0m28.871s 00:31:01.672 user 1m3.745s 00:31:01.672 sys 0m7.847s 00:31:01.672 19:46:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:01.672 19:46:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:01.672 ************************************ 00:31:01.672 END TEST nvmf_bdevperf 00:31:01.672 ************************************ 00:31:01.933 19:46:27 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:01.933 19:46:27 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:01.933 19:46:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:01.933 19:46:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:01.933 ************************************ 00:31:01.933 START TEST nvmf_target_disconnect 00:31:01.933 ************************************ 00:31:01.933 19:46:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:01.933 * Looking for test storage... 
00:31:01.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:01.933 19:46:28 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:01.933 19:46:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:31:01.933 19:46:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:01.933 19:46:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:01.933 19:46:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:01.933 19:46:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:01.933 19:46:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:01.933 19:46:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:01.933 19:46:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:01.933 19:46:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:01.933 19:46:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:01.934 19:46:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:01.934 19:46:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:01.934 19:46:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:01.934 19:46:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:01.934 19:46:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:01.934 19:46:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:01.934 19:46:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:01.934 19:46:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:01.934 19:46:28 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:01.934 19:46:28 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:01.934 19:46:28 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:01.934 19:46:28 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.934 19:46:28 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.934 19:46:28 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.934 19:46:28 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:31:01.934 19:46:28 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.934 19:46:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:31:01.934 19:46:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:01.934 19:46:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:01.934 19:46:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:01.934 19:46:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:01.934 19:46:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:01.934 19:46:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:01.934 19:46:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:01.934 19:46:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:01.934 19:46:28 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:01.934 19:46:28 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:31:01.934 19:46:28 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:31:01.934 19:46:28 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:31:01.934 19:46:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:01.934 19:46:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:01.934 19:46:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:31:01.934 19:46:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:01.934 19:46:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:01.934 19:46:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:01.934 19:46:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:01.934 19:46:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:01.934 19:46:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:01.934 19:46:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:01.934 19:46:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:31:01.934 19:46:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:10.142 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:10.142 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:31:10.142 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:10.142 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:10.142 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:10.142 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:10.142 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:10.142 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:31:10.142 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:10.142 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:31:10.142 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:31:10.142 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:31:10.142 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:31:10.142 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:10.143 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:10.143 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:10.143 19:46:36 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:10.143 Found net devices under 0000:31:00.0: cvl_0_0 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:10.143 Found net devices under 0000:31:00.1: cvl_0_1 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:10.143 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:10.405 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:10.405 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:10.405 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:10.405 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:10.405 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:31:10.405 00:31:10.405 --- 10.0.0.2 ping statistics --- 00:31:10.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:10.405 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:31:10.405 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:10.405 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:10.405 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.386 ms 00:31:10.405 00:31:10.405 --- 10.0.0.1 ping statistics --- 00:31:10.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:10.405 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:31:10.405 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:10.405 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:31:10.405 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:10.405 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:10.405 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:10.405 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:10.405 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:10.405 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:10.405 19:46:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:10.405 19:46:36 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:31:10.405 19:46:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:10.405 19:46:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:10.405 19:46:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:10.405 ************************************ 00:31:10.405 START TEST nvmf_target_disconnect_tc1 00:31:10.405 ************************************ 00:31:10.405 19:46:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1 00:31:10.405 19:46:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:10.405 19:46:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:31:10.405 
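The nvmf/common.sh trace above sets up the test topology before tc1 starts: the target-side port (cvl_0_0) moves into the cvl_0_0_ns_spdk namespace with 10.0.0.2, the initiator-side port (cvl_0_1) keeps 10.0.0.1 in the root namespace, and TCP port 4420 is opened in iptables. The same steps collected as a sketch in execution order:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator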
19:46:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:10.405 19:46:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:10.405 19:46:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:10.405 19:46:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:10.405 19:46:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:10.405 19:46:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:10.405 19:46:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:10.405 19:46:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:10.405 19:46:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:31:10.405 19:46:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:10.667 EAL: No free 2048 kB hugepages reported on node 1 00:31:10.667 [2024-05-15 19:46:36.683892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.667 [2024-05-15 19:46:36.684448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.667 [2024-05-15 19:46:36.684474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f4520 with addr=10.0.0.2, port=4420 00:31:10.667 [2024-05-15 19:46:36.684519] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:10.667 [2024-05-15 19:46:36.684538] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:10.667 [2024-05-15 19:46:36.684546] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:31:10.667 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:31:10.667 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:31:10.667 Initializing NVMe Controllers 00:31:10.667 19:46:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:31:10.667 19:46:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:10.667 19:46:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:10.667 19:46:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:10.667 00:31:10.667 real 0m0.140s 00:31:10.667 user 0m0.055s 00:31:10.667 sys 0m0.084s 00:31:10.667 19:46:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:10.667 19:46:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:10.667 ************************************ 00:31:10.667 END TEST nvmf_target_disconnect_tc1 00:31:10.667 ************************************ 00:31:10.667 19:46:36 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:31:10.667 19:46:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:10.667 19:46:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:10.667 19:46:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:10.667 ************************************ 00:31:10.667 START TEST nvmf_target_disconnect_tc2 00:31:10.667 ************************************ 00:31:10.667 19:46:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc2 00:31:10.667 19:46:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:31:10.667 19:46:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:31:10.667 19:46:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:10.667 19:46:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:10.667 19:46:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:10.667 19:46:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3803246 00:31:10.667 19:46:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3803246 00:31:10.667 19:46:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:31:10.667 19:46:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3803246 ']' 00:31:10.667 19:46:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:10.667 19:46:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:10.667 19:46:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:10.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
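The tc1 trace above boils down to a negative check: with no nvmf target listening yet, the bundled reconnect example must fail its probe of 10.0.0.2:4420 and exit non-zero, which the NOT/valid_exec_arg wrappers turn into a pass. A minimal standalone sketch of that check under those assumptions (a plain exit-status test in place of the suite's NOT helper; binary path and flags copied from the trace):

#!/usr/bin/env bash
# Hypothetical condensed form of nvmf_target_disconnect_tc1 from the xtrace above.
set -e
RECONNECT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect

# Nothing is listening on 10.0.0.2:4420 yet, so spdk_nvme_probe() inside the example
# should hit connect() errno=111 and the process should exit non-zero.
if "$RECONNECT" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'; then
    echo "unexpected: probe succeeded with no listener" >&2
    exit 1
fi
echo "tc1 OK: probe failed as expected"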
00:31:10.667 19:46:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:10.667 19:46:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:10.667 [2024-05-15 19:46:36.836128] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:31:10.667 [2024-05-15 19:46:36.836176] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:10.929 EAL: No free 2048 kB hugepages reported on node 1 00:31:10.929 [2024-05-15 19:46:36.931356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:10.929 [2024-05-15 19:46:37.025828] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:10.929 [2024-05-15 19:46:37.025893] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:10.929 [2024-05-15 19:46:37.025901] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:10.929 [2024-05-15 19:46:37.025908] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:10.929 [2024-05-15 19:46:37.025914] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:10.929 [2024-05-15 19:46:37.026119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:31:10.929 [2024-05-15 19:46:37.026278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:31:10.929 [2024-05-15 19:46:37.026440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:31:10.929 [2024-05-15 19:46:37.026441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:31:11.876 19:46:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:11.876 19:46:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:31:11.876 19:46:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:11.876 19:46:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:11.876 19:46:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:11.876 19:46:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:11.876 19:46:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:11.876 19:46:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.876 19:46:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:11.876 Malloc0 00:31:11.876 19:46:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.876 19:46:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:11.876 19:46:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.876 19:46:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:11.876 [2024-05-15 19:46:37.808928] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:11.876 19:46:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.876 19:46:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:11.876 19:46:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.876 19:46:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:11.876 19:46:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.876 19:46:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:11.876 19:46:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.876 19:46:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:11.876 19:46:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.876 19:46:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:11.876 19:46:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.876 19:46:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:11.876 [2024-05-15 19:46:37.848989] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:31:11.876 [2024-05-15 19:46:37.849360] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:11.876 19:46:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.876 19:46:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:11.876 19:46:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.876 19:46:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:11.876 19:46:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.876 19:46:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3803363 00:31:11.876 19:46:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:31:11.876 19:46:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:11.876 EAL: No free 2048 kB hugepages reported on node 1 00:31:13.797 19:46:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3803246 00:31:13.797 19:46:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:31:13.797 Read completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Read completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Read completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Read completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Read completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Read completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Read completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Read completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Read completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Read completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Read completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Read completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Write completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Write completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Read completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Write completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Write completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Read completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Read completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Read completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Read completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Write completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Read completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Write completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Write completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Write completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Read completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Write completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Read completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Read completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Read completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Write completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Read completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Read completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Read completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Read completed with 
error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Read completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 [2024-05-15 19:46:39.883415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:13.797 Read completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Read completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Read completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Read completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Read completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Read completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Write completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Write completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Write completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Read completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Read completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Read completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Read completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Read completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Write completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Write completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Write completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Read completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Write completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Read completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Read completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Write completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Read completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Read completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Write completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Write completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 Read completed with error (sct=0, sc=8) 00:31:13.797 starting I/O failed 00:31:13.797 [2024-05-15 19:46:39.883592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.797 [2024-05-15 19:46:39.884001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.797 [2024-05-15 19:46:39.884368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.797 [2024-05-15 19:46:39.884379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.797 qpair failed and we were unable to recover it. 
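The I/O failure dump above is produced by the tc2 flow the trace walks through: bring the target up inside the cvl_0_0_ns_spdk namespace on cores 4-7, configure it over the RPC socket (malloc bdev, TCP transport, subsystem, namespace, data and discovery listeners on 10.0.0.2:4420), start the reconnect workload against it, then SIGKILL the target so every in-flight command completes with an error and later qpair connects see errno 111. A condensed sketch under those assumptions, with scripts/rpc.py standing in for the suite's rpc_cmd wrapper:

#!/usr/bin/env bash
# Hypothetical condensed form of disconnect_init plus the tc2 kill step from the trace.
# Paths, namespace name, subsystem name and flags are taken from the log above.
set -e
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Target runs in the test namespace (nvmfappstart in the trace), cores 4-7 via -m 0xF0.
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF0 &
tgt_pid=$!
sleep 2   # crude stand-in for waitforlisten on /var/tmp/spdk.sock

# Configuration sequence mirrored from the rpc_cmd calls in the trace.
"$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc0
"$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o
"$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Start the workload, let it connect, then kill the target hard (kill -9 in the trace).
"$SPDK/build/examples/reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
sleep 2
kill -9 "$tgt_pid"   # outstanding I/O now completes with errors; reconnects hit errno 111
sleep 2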
00:31:13.797 [2024-05-15 19:46:39.884453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.797 [2024-05-15 19:46:39.884734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.797 [2024-05-15 19:46:39.884742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.797 qpair failed and we were unable to recover it. 00:31:13.797 [2024-05-15 19:46:39.885115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.797 [2024-05-15 19:46:39.885354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.797 [2024-05-15 19:46:39.885363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.797 qpair failed and we were unable to recover it. 00:31:13.797 [2024-05-15 19:46:39.885578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.797 [2024-05-15 19:46:39.885945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.797 [2024-05-15 19:46:39.885955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.797 qpair failed and we were unable to recover it. 00:31:13.797 [2024-05-15 19:46:39.886375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.797 [2024-05-15 19:46:39.886543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.797 [2024-05-15 19:46:39.886550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.797 qpair failed and we were unable to recover it. 00:31:13.797 [2024-05-15 19:46:39.886865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.797 [2024-05-15 19:46:39.887241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.797 [2024-05-15 19:46:39.887250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.797 qpair failed and we were unable to recover it. 00:31:13.797 [2024-05-15 19:46:39.887628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.797 [2024-05-15 19:46:39.887839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.797 [2024-05-15 19:46:39.887849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.797 qpair failed and we were unable to recover it. 00:31:13.797 [2024-05-15 19:46:39.888212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.797 [2024-05-15 19:46:39.888660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.797 [2024-05-15 19:46:39.888668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.797 qpair failed and we were unable to recover it. 
00:31:13.797 [2024-05-15 19:46:39.889079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.797 [2024-05-15 19:46:39.889386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.797 [2024-05-15 19:46:39.889395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.797 qpair failed and we were unable to recover it. 00:31:13.797 [2024-05-15 19:46:39.889838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.797 [2024-05-15 19:46:39.890251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.797 [2024-05-15 19:46:39.890260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.797 qpair failed and we were unable to recover it. 00:31:13.797 [2024-05-15 19:46:39.890630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.797 [2024-05-15 19:46:39.891024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.797 [2024-05-15 19:46:39.891033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.798 qpair failed and we were unable to recover it. 00:31:13.798 [2024-05-15 19:46:39.891229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.891639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.891648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.798 qpair failed and we were unable to recover it. 00:31:13.798 [2024-05-15 19:46:39.892045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.892453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.892462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.798 qpair failed and we were unable to recover it. 00:31:13.798 [2024-05-15 19:46:39.892810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.893048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.893057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.798 qpair failed and we were unable to recover it. 00:31:13.798 [2024-05-15 19:46:39.893422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.893655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.893664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.798 qpair failed and we were unable to recover it. 
00:31:13.798 [2024-05-15 19:46:39.894053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.894452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.894461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.798 qpair failed and we were unable to recover it. 00:31:13.798 [2024-05-15 19:46:39.894823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.895228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.895238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.798 qpair failed and we were unable to recover it. 00:31:13.798 [2024-05-15 19:46:39.895636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.895969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.895978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.798 qpair failed and we were unable to recover it. 00:31:13.798 [2024-05-15 19:46:39.896368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.896731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.896740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.798 qpair failed and we were unable to recover it. 00:31:13.798 [2024-05-15 19:46:39.897131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.897435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.897442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.798 qpair failed and we were unable to recover it. 00:31:13.798 [2024-05-15 19:46:39.897816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.898201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.898208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.798 qpair failed and we were unable to recover it. 00:31:13.798 [2024-05-15 19:46:39.898589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.898896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.898904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.798 qpair failed and we were unable to recover it. 
00:31:13.798 [2024-05-15 19:46:39.899122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.899369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.899376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.798 qpair failed and we were unable to recover it. 00:31:13.798 [2024-05-15 19:46:39.899760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.899980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.899989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.798 qpair failed and we were unable to recover it. 00:31:13.798 [2024-05-15 19:46:39.900356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.900733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.900741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.798 qpair failed and we were unable to recover it. 00:31:13.798 [2024-05-15 19:46:39.901128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.901467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.901475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.798 qpair failed and we were unable to recover it. 00:31:13.798 [2024-05-15 19:46:39.901841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.902043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.902051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.798 qpair failed and we were unable to recover it. 00:31:13.798 [2024-05-15 19:46:39.902409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.902740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.902748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.798 qpair failed and we were unable to recover it. 00:31:13.798 [2024-05-15 19:46:39.903099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.903442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.903450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.798 qpair failed and we were unable to recover it. 
00:31:13.798 [2024-05-15 19:46:39.903836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.904126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.904135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.798 qpair failed and we were unable to recover it. 00:31:13.798 [2024-05-15 19:46:39.904388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.904599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.904606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.798 qpair failed and we were unable to recover it. 00:31:13.798 [2024-05-15 19:46:39.904865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.905214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.905222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.798 qpair failed and we were unable to recover it. 00:31:13.798 [2024-05-15 19:46:39.905589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.905993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.906001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.798 qpair failed and we were unable to recover it. 00:31:13.798 [2024-05-15 19:46:39.906391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.906774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.906782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.798 qpair failed and we were unable to recover it. 00:31:13.798 [2024-05-15 19:46:39.907172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.907527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.907535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.798 qpair failed and we were unable to recover it. 00:31:13.798 [2024-05-15 19:46:39.907856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.908234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.908242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.798 qpair failed and we were unable to recover it. 
00:31:13.798 [2024-05-15 19:46:39.908591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.908977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.908985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.798 qpair failed and we were unable to recover it. 00:31:13.798 [2024-05-15 19:46:39.909326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.909720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.909728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.798 qpair failed and we were unable to recover it. 00:31:13.798 [2024-05-15 19:46:39.910135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.910526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.910534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.798 qpair failed and we were unable to recover it. 00:31:13.798 [2024-05-15 19:46:39.910921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.911235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.798 [2024-05-15 19:46:39.911242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.798 qpair failed and we were unable to recover it. 00:31:13.798 [2024-05-15 19:46:39.911586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.911966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.911973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.799 qpair failed and we were unable to recover it. 00:31:13.799 [2024-05-15 19:46:39.912358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.912685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.912693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.799 qpair failed and we were unable to recover it. 00:31:13.799 [2024-05-15 19:46:39.913047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.913411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.913419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.799 qpair failed and we were unable to recover it. 
00:31:13.799 [2024-05-15 19:46:39.913593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.913978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.913986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.799 qpair failed and we were unable to recover it. 00:31:13.799 [2024-05-15 19:46:39.914365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.914783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.914791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.799 qpair failed and we were unable to recover it. 00:31:13.799 [2024-05-15 19:46:39.915176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.915508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.915517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.799 qpair failed and we were unable to recover it. 00:31:13.799 [2024-05-15 19:46:39.915915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.916322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.916330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.799 qpair failed and we were unable to recover it. 00:31:13.799 [2024-05-15 19:46:39.916758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.917150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.917159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.799 qpair failed and we were unable to recover it. 00:31:13.799 [2024-05-15 19:46:39.917609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.917918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.917925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.799 qpair failed and we were unable to recover it. 00:31:13.799 [2024-05-15 19:46:39.918247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.918626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.918634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.799 qpair failed and we were unable to recover it. 
00:31:13.799 [2024-05-15 19:46:39.918968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.919365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.919373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.799 qpair failed and we were unable to recover it. 00:31:13.799 [2024-05-15 19:46:39.919719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.920010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.920018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.799 qpair failed and we were unable to recover it. 00:31:13.799 [2024-05-15 19:46:39.920365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.920557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.920565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.799 qpair failed and we were unable to recover it. 00:31:13.799 [2024-05-15 19:46:39.920904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.921265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.921273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.799 qpair failed and we were unable to recover it. 00:31:13.799 [2024-05-15 19:46:39.921500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.921882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.921889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.799 qpair failed and we were unable to recover it. 00:31:13.799 [2024-05-15 19:46:39.922280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.922619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.922627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.799 qpair failed and we were unable to recover it. 00:31:13.799 [2024-05-15 19:46:39.923027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.923424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.923432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.799 qpair failed and we were unable to recover it. 
00:31:13.799 [2024-05-15 19:46:39.923797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.924040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.924049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.799 qpair failed and we were unable to recover it. 00:31:13.799 [2024-05-15 19:46:39.924427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.924799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.924808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.799 qpair failed and we were unable to recover it. 00:31:13.799 [2024-05-15 19:46:39.925241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.925595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.925603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.799 qpair failed and we were unable to recover it. 00:31:13.799 [2024-05-15 19:46:39.925951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.926183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.926190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.799 qpair failed and we were unable to recover it. 00:31:13.799 [2024-05-15 19:46:39.926560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.926959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.926967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.799 qpair failed and we were unable to recover it. 00:31:13.799 [2024-05-15 19:46:39.927323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.927705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.927713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.799 qpair failed and we were unable to recover it. 00:31:13.799 [2024-05-15 19:46:39.927900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.928237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.928245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.799 qpair failed and we were unable to recover it. 
00:31:13.799 [2024-05-15 19:46:39.928618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.928932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.928939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.799 qpair failed and we were unable to recover it. 00:31:13.799 [2024-05-15 19:46:39.929338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.929585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.929593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.799 qpair failed and we were unable to recover it. 00:31:13.799 [2024-05-15 19:46:39.929982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.930322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.930330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.799 qpair failed and we were unable to recover it. 00:31:13.799 [2024-05-15 19:46:39.930679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.931069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.931076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.799 qpair failed and we were unable to recover it. 00:31:13.799 [2024-05-15 19:46:39.931353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.931523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.931531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.799 qpair failed and we were unable to recover it. 00:31:13.799 [2024-05-15 19:46:39.931852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.932218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.799 [2024-05-15 19:46:39.932226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.799 qpair failed and we were unable to recover it. 00:31:13.800 [2024-05-15 19:46:39.932576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.932972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.932980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.800 qpair failed and we were unable to recover it. 
00:31:13.800 [2024-05-15 19:46:39.933217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.933444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.933452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.800 qpair failed and we were unable to recover it. 00:31:13.800 [2024-05-15 19:46:39.933830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.934181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.934190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.800 qpair failed and we were unable to recover it. 00:31:13.800 [2024-05-15 19:46:39.934561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.934917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.934925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.800 qpair failed and we were unable to recover it. 00:31:13.800 [2024-05-15 19:46:39.935257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.935312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.935323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.800 qpair failed and we were unable to recover it. 00:31:13.800 [2024-05-15 19:46:39.935659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.936012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.936020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.800 qpair failed and we were unable to recover it. 00:31:13.800 [2024-05-15 19:46:39.936403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.936791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.936798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.800 qpair failed and we were unable to recover it. 00:31:13.800 [2024-05-15 19:46:39.937148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.937345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.937354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.800 qpair failed and we were unable to recover it. 
00:31:13.800 [2024-05-15 19:46:39.937693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.938029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.938038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.800 qpair failed and we were unable to recover it. 00:31:13.800 [2024-05-15 19:46:39.938407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.938765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.938774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.800 qpair failed and we were unable to recover it. 00:31:13.800 [2024-05-15 19:46:39.939127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.939523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.939531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.800 qpair failed and we were unable to recover it. 00:31:13.800 [2024-05-15 19:46:39.939888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.940254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.940261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.800 qpair failed and we were unable to recover it. 00:31:13.800 [2024-05-15 19:46:39.940628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.941028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.941035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.800 qpair failed and we were unable to recover it. 00:31:13.800 [2024-05-15 19:46:39.941389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.941785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.941793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.800 qpair failed and we were unable to recover it. 00:31:13.800 [2024-05-15 19:46:39.942190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.942582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.942590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.800 qpair failed and we were unable to recover it. 
00:31:13.800 [2024-05-15 19:46:39.942977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.943184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.943192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.800 qpair failed and we were unable to recover it. 00:31:13.800 [2024-05-15 19:46:39.943557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.943954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.943962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.800 qpair failed and we were unable to recover it. 00:31:13.800 [2024-05-15 19:46:39.944298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.944718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.944728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.800 qpair failed and we were unable to recover it. 00:31:13.800 [2024-05-15 19:46:39.945053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.945528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.945557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.800 qpair failed and we were unable to recover it. 00:31:13.800 [2024-05-15 19:46:39.945959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.946207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.946217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.800 qpair failed and we were unable to recover it. 00:31:13.800 [2024-05-15 19:46:39.946587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.946942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.946950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.800 qpair failed and we were unable to recover it. 00:31:13.800 [2024-05-15 19:46:39.947300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.947669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.947677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.800 qpair failed and we were unable to recover it. 
00:31:13.800 [2024-05-15 19:46:39.948032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.948431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.948439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.800 qpair failed and we were unable to recover it. 00:31:13.800 [2024-05-15 19:46:39.948840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.949239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.949247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.800 qpair failed and we were unable to recover it. 00:31:13.800 [2024-05-15 19:46:39.949616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.950018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.950026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.800 qpair failed and we were unable to recover it. 00:31:13.800 [2024-05-15 19:46:39.950405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.950801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.950809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.800 qpair failed and we were unable to recover it. 00:31:13.800 [2024-05-15 19:46:39.951176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.952014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.952032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.800 qpair failed and we were unable to recover it. 00:31:13.800 [2024-05-15 19:46:39.952411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.953298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.953328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.800 qpair failed and we were unable to recover it. 00:31:13.800 [2024-05-15 19:46:39.953694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.954418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.954435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.800 qpair failed and we were unable to recover it. 
00:31:13.800 [2024-05-15 19:46:39.954805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.800 [2024-05-15 19:46:39.955155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.955163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.801 qpair failed and we were unable to recover it. 00:31:13.801 [2024-05-15 19:46:39.955550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.955789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.955797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.801 qpair failed and we were unable to recover it. 00:31:13.801 [2024-05-15 19:46:39.956165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.956553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.956562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.801 qpair failed and we were unable to recover it. 00:31:13.801 [2024-05-15 19:46:39.956919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.957320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.957328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.801 qpair failed and we were unable to recover it. 00:31:13.801 [2024-05-15 19:46:39.957687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.958093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.958100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.801 qpair failed and we were unable to recover it. 00:31:13.801 [2024-05-15 19:46:39.958466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.958851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.958859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.801 qpair failed and we were unable to recover it. 00:31:13.801 [2024-05-15 19:46:39.959222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.959576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.959585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.801 qpair failed and we were unable to recover it. 
00:31:13.801 [2024-05-15 19:46:39.959973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.960370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.960378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.801 qpair failed and we were unable to recover it. 00:31:13.801 [2024-05-15 19:46:39.960783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.961180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.961189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.801 qpair failed and we were unable to recover it. 00:31:13.801 [2024-05-15 19:46:39.961576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.961959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.961967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.801 qpair failed and we were unable to recover it. 00:31:13.801 [2024-05-15 19:46:39.962351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.962719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.962726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.801 qpair failed and we were unable to recover it. 00:31:13.801 [2024-05-15 19:46:39.963091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.963488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.963496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.801 qpair failed and we were unable to recover it. 00:31:13.801 [2024-05-15 19:46:39.963863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.964263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.964272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.801 qpair failed and we were unable to recover it. 00:31:13.801 [2024-05-15 19:46:39.964623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.964973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.964981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.801 qpair failed and we were unable to recover it. 
00:31:13.801 [2024-05-15 19:46:39.965353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.965715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.965723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.801 qpair failed and we were unable to recover it. 00:31:13.801 [2024-05-15 19:46:39.966098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.966494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.966502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.801 qpair failed and we were unable to recover it. 00:31:13.801 [2024-05-15 19:46:39.966888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.967241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.967248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.801 qpair failed and we were unable to recover it. 00:31:13.801 [2024-05-15 19:46:39.967611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.967971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.967979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.801 qpair failed and we were unable to recover it. 00:31:13.801 [2024-05-15 19:46:39.968349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.968560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.968569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.801 qpair failed and we were unable to recover it. 00:31:13.801 [2024-05-15 19:46:39.968933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.969173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.969182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.801 qpair failed and we were unable to recover it. 00:31:13.801 [2024-05-15 19:46:39.969505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.969910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.969919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.801 qpair failed and we were unable to recover it. 
00:31:13.801 [2024-05-15 19:46:39.970286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.970819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.970827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.801 qpair failed and we were unable to recover it. 00:31:13.801 [2024-05-15 19:46:39.971216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.971421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.971430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.801 qpair failed and we were unable to recover it. 00:31:13.801 [2024-05-15 19:46:39.971770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.972171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.801 [2024-05-15 19:46:39.972179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.801 qpair failed and we were unable to recover it. 00:31:13.801 [2024-05-15 19:46:39.972371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.802 [2024-05-15 19:46:39.972724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.802 [2024-05-15 19:46:39.972732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.802 qpair failed and we were unable to recover it. 00:31:13.802 [2024-05-15 19:46:39.973116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.802 [2024-05-15 19:46:39.973713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.802 [2024-05-15 19:46:39.973730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.802 qpair failed and we were unable to recover it. 00:31:13.802 [2024-05-15 19:46:39.973955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.802 [2024-05-15 19:46:39.974349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.802 [2024-05-15 19:46:39.974357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.802 qpair failed and we were unable to recover it. 00:31:13.802 [2024-05-15 19:46:39.974728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.802 [2024-05-15 19:46:39.975125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.802 [2024-05-15 19:46:39.975132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.802 qpair failed and we were unable to recover it. 
00:31:13.802 [2024-05-15 19:46:39.975499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.802 [2024-05-15 19:46:39.975854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.802 [2024-05-15 19:46:39.975862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.802 qpair failed and we were unable to recover it. 00:31:13.802 [2024-05-15 19:46:39.976217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.802 [2024-05-15 19:46:39.976575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.802 [2024-05-15 19:46:39.976583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.802 qpair failed and we were unable to recover it. 00:31:13.802 [2024-05-15 19:46:39.976779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.802 [2024-05-15 19:46:39.977027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.802 [2024-05-15 19:46:39.977035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.802 qpair failed and we were unable to recover it. 00:31:13.802 [2024-05-15 19:46:39.977392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.802 [2024-05-15 19:46:39.977788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.802 [2024-05-15 19:46:39.977795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:13.802 qpair failed and we were unable to recover it. 00:31:13.802 [2024-05-15 19:46:39.977986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-05-15 19:46:39.978337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-05-15 19:46:39.978346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 00:31:14.071 [2024-05-15 19:46:39.978718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-05-15 19:46:39.979118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-05-15 19:46:39.979126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 00:31:14.071 [2024-05-15 19:46:39.979499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-05-15 19:46:39.979849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-05-15 19:46:39.979857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 
00:31:14.071 [2024-05-15 19:46:39.980243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-05-15 19:46:39.980600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-05-15 19:46:39.980608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 00:31:14.071 [2024-05-15 19:46:39.980997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-05-15 19:46:39.981396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-05-15 19:46:39.981405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 00:31:14.071 [2024-05-15 19:46:39.981721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-05-15 19:46:39.982123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-05-15 19:46:39.982132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 00:31:14.071 [2024-05-15 19:46:39.982513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-05-15 19:46:39.982891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-05-15 19:46:39.982899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 00:31:14.071 [2024-05-15 19:46:39.983285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-05-15 19:46:39.983655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-05-15 19:46:39.983663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 00:31:14.071 [2024-05-15 19:46:39.984031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-05-15 19:46:39.984429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-05-15 19:46:39.984438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 00:31:14.071 [2024-05-15 19:46:39.984807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-05-15 19:46:39.985215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-05-15 19:46:39.985223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 
00:31:14.071 [2024-05-15 19:46:39.985597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-05-15 19:46:39.986002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-05-15 19:46:39.986009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 00:31:14.071 [2024-05-15 19:46:39.986456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-05-15 19:46:39.986750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-05-15 19:46:39.986757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 00:31:14.071 [2024-05-15 19:46:39.987145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-05-15 19:46:39.987510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-05-15 19:46:39.987518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 00:31:14.071 [2024-05-15 19:46:39.987911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-05-15 19:46:39.988242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-05-15 19:46:39.988250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 00:31:14.071 [2024-05-15 19:46:39.988604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-05-15 19:46:39.988806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-05-15 19:46:39.988813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 00:31:14.071 [2024-05-15 19:46:39.989167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-05-15 19:46:39.989524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-05-15 19:46:39.989532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 00:31:14.071 [2024-05-15 19:46:39.989922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-05-15 19:46:39.990321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-05-15 19:46:39.990330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 
00:31:14.071 [2024-05-15 19:46:39.990569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-05-15 19:46:39.990921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-05-15 19:46:39.990928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 00:31:14.071 [2024-05-15 19:46:39.991320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-05-15 19:46:39.991672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-05-15 19:46:39.991680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 00:31:14.071 [2024-05-15 19:46:39.992071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-05-15 19:46:39.992477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-05-15 19:46:39.992485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 00:31:14.071 [2024-05-15 19:46:39.992850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-05-15 19:46:39.993259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-05-15 19:46:39.993266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 00:31:14.071 [2024-05-15 19:46:39.993637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:39.993997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:39.994005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-05-15 19:46:39.994386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:39.994785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:39.994793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-05-15 19:46:39.995149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:39.995458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:39.995467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 
00:31:14.072 [2024-05-15 19:46:39.995862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:39.996218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:39.996225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-05-15 19:46:39.996501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:39.996903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:39.996911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-05-15 19:46:39.997283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:39.997683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:39.997691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-05-15 19:46:39.998087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:39.998488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:39.998496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-05-15 19:46:39.998868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:39.999267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:39.999275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-05-15 19:46:39.999644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:39.999881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:39.999889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-05-15 19:46:40.000261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.000660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.000669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 
00:31:14.072 [2024-05-15 19:46:40.001036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.001485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.001493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-05-15 19:46:40.002326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.002714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.002722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-05-15 19:46:40.002887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.003267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.003274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-05-15 19:46:40.003674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.004050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.004058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-05-15 19:46:40.004338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.004623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.004631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-05-15 19:46:40.004836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.005193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.005201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-05-15 19:46:40.005496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.005891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.005898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 
00:31:14.072 [2024-05-15 19:46:40.006276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.006513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.006521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-05-15 19:46:40.006872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.007285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.007293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-05-15 19:46:40.007677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.008044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.008052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-05-15 19:46:40.008320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.008697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.008706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-05-15 19:46:40.009096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.009490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.009498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-05-15 19:46:40.009876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.010235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.010243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-05-15 19:46:40.010629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.011018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.011026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 
00:31:14.072 [2024-05-15 19:46:40.011297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.011678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.011686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-05-15 19:46:40.012105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.012582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.012611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-05-15 19:46:40.012863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.013236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.013244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-05-15 19:46:40.013525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.013699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.013707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-05-15 19:46:40.014078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.014292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.014319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-05-15 19:46:40.014617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.015012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.015020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-05-15 19:46:40.015399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.015729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.015737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 
00:31:14.072 [2024-05-15 19:46:40.016119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.016370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.016378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-05-15 19:46:40.016732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.017112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.017120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-05-15 19:46:40.017530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.017942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.017949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-05-15 19:46:40.018192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.018569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.018577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-05-15 19:46:40.018952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.019326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.019333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-05-15 19:46:40.019719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.020117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.020126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-05-15 19:46:40.020503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.020857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.020866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 
00:31:14.072 [2024-05-15 19:46:40.021241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.021585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.021594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-05-15 19:46:40.021823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.022178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-05-15 19:46:40.022187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-05-15 19:46:40.022442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.022793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.022800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-05-15 19:46:40.023065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.023383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.023391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-05-15 19:46:40.023798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.024107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.024115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-05-15 19:46:40.024487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.024849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.024857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-05-15 19:46:40.025176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.025513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.025520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 
00:31:14.073 [2024-05-15 19:46:40.025795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.026203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.026210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-05-15 19:46:40.026608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.027012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.027020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-05-15 19:46:40.027336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.027726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.027733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-05-15 19:46:40.028120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.028405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.028413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-05-15 19:46:40.028794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.029070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.029078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-05-15 19:46:40.029451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.029855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.029863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-05-15 19:46:40.030327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.030584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.030592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 
00:31:14.073 [2024-05-15 19:46:40.030825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.031180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.031188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-05-15 19:46:40.031578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.031883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.031891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-05-15 19:46:40.032273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.032675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.032683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-05-15 19:46:40.033051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.033404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.033412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-05-15 19:46:40.033748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.034147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.034155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-05-15 19:46:40.034348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.034703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.034711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-05-15 19:46:40.035077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.035480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.035488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 
00:31:14.073 [2024-05-15 19:46:40.035838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.036079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.036087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-05-15 19:46:40.036482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.036833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.036841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-05-15 19:46:40.037217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.037489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.037497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-05-15 19:46:40.037743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.038007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.038015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-05-15 19:46:40.038403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.038817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.038824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-05-15 19:46:40.039191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.040352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.040374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-05-15 19:46:40.040553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.040814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.040822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 
00:31:14.073 [2024-05-15 19:46:40.041076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.041434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.041442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-05-15 19:46:40.041774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.041987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.041995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-05-15 19:46:40.042363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.042738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.042745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-05-15 19:46:40.043138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.043414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.043423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-05-15 19:46:40.043656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.044059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.044067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-05-15 19:46:40.044429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.044764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.044771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-05-15 19:46:40.045130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.045530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.045539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 
00:31:14.073 [2024-05-15 19:46:40.045908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.046310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.046325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-05-15 19:46:40.046681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.047097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.047106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-05-15 19:46:40.047508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.047719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.047728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-05-15 19:46:40.048108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.048349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.048357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-05-15 19:46:40.048686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.049000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-05-15 19:46:40.049008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.074 [2024-05-15 19:46:40.049411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.049652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.049660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-05-15 19:46:40.050032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.050430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.050438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 
00:31:14.074 [2024-05-15 19:46:40.050679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.051037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.051045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-05-15 19:46:40.051275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.051533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.051541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-05-15 19:46:40.051817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.052160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.052168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-05-15 19:46:40.052543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.052946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.052954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-05-15 19:46:40.053336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.053681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.053689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-05-15 19:46:40.053974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.054359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.054367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-05-15 19:46:40.054748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.054980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.054989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 
00:31:14.074 [2024-05-15 19:46:40.055260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.055622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.055630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-05-15 19:46:40.055888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.056148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.056157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-05-15 19:46:40.056518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.056839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.056847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-05-15 19:46:40.057228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.057595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.057602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-05-15 19:46:40.057864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.058272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.058280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-05-15 19:46:40.058664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.058917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.058924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-05-15 19:46:40.059216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.059587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.059595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 
00:31:14.074 [2024-05-15 19:46:40.059951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.060193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.060201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-05-15 19:46:40.060370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.060712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.060720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-05-15 19:46:40.061067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.061271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.061282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-05-15 19:46:40.061672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.062079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.062088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-05-15 19:46:40.062490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.062717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.062724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-05-15 19:46:40.063100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.063257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.063264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-05-15 19:46:40.063493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.063861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.063869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 
00:31:14.074 [2024-05-15 19:46:40.064151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.064538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.064546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-05-15 19:46:40.064927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.065331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.065339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-05-15 19:46:40.065612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.065964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.065973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-05-15 19:46:40.066339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.066713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.066720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-05-15 19:46:40.067068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.067464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.067471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-05-15 19:46:40.067776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.068164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.068174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-05-15 19:46:40.068543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.068912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.068920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 
00:31:14.074 [2024-05-15 19:46:40.069286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.069581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.069590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-05-15 19:46:40.069979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.070222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.070231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-05-15 19:46:40.070584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.070990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-05-15 19:46:40.070999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-05-15 19:46:40.071173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.071612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.071620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-05-15 19:46:40.071979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.072381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.072389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-05-15 19:46:40.072659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.073059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.073067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-05-15 19:46:40.073465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.073728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.073735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 
00:31:14.075 [2024-05-15 19:46:40.074149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.074511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.074519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-05-15 19:46:40.074887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.075169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.075180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-05-15 19:46:40.075399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.075596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.075603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-05-15 19:46:40.075958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.076161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.076169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-05-15 19:46:40.076593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.076962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.076970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-05-15 19:46:40.077200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.077378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.077386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-05-15 19:46:40.077759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.078159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.078167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 
00:31:14.075 [2024-05-15 19:46:40.078545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.078944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.078952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-05-15 19:46:40.079322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.079703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.079711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-05-15 19:46:40.079924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.080245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.080253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-05-15 19:46:40.080626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.080965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.080973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-05-15 19:46:40.081220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.081581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.081589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-05-15 19:46:40.081956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.082383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.082391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-05-15 19:46:40.082886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.083141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.083151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 
00:31:14.075 [2024-05-15 19:46:40.083499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.083913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.083921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-05-15 19:46:40.084168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.084534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.084542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-05-15 19:46:40.084910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.085223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.085231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-05-15 19:46:40.085595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.085941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.085949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-05-15 19:46:40.086330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.086582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.086590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-05-15 19:46:40.086860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.087210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.087218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-05-15 19:46:40.087589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.087942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.087950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 
00:31:14.075 [2024-05-15 19:46:40.088319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.088672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.088680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-05-15 19:46:40.089049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.089431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.089439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-05-15 19:46:40.089673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.089954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.089963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-05-15 19:46:40.090324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.090703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.090710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-05-15 19:46:40.090961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.091357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.091366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-05-15 19:46:40.091709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.092103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.092110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-05-15 19:46:40.092490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.092889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.092897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 
00:31:14.075 [2024-05-15 19:46:40.093132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.093496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.093504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-05-15 19:46:40.093890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.094281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.094290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-05-15 19:46:40.094652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.095007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.095015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-05-15 19:46:40.095264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.095556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.095564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-05-15 19:46:40.095878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.096267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.096275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-05-15 19:46:40.096721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.097088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.097097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-05-15 19:46:40.097294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.097582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.097591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 
00:31:14.075 [2024-05-15 19:46:40.097956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.098321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-05-15 19:46:40.098329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.076 [2024-05-15 19:46:40.098673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.099075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.099083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 [2024-05-15 19:46:40.099542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.099930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.099940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 [2024-05-15 19:46:40.100324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.100670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.100678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 [2024-05-15 19:46:40.101043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.101440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.101449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 [2024-05-15 19:46:40.101809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.102216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.102224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 [2024-05-15 19:46:40.102607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.102969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.102977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 
00:31:14.076 [2024-05-15 19:46:40.103394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.103784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.103792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 [2024-05-15 19:46:40.104159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.104401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.104409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 [2024-05-15 19:46:40.104768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.105145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.105153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 [2024-05-15 19:46:40.105527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.105771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.105779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 [2024-05-15 19:46:40.106146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.106512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.106519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 [2024-05-15 19:46:40.106884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.107244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.107252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 [2024-05-15 19:46:40.107613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.107944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.107952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 
00:31:14.076 [2024-05-15 19:46:40.108199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.108575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.108583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 [2024-05-15 19:46:40.108948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.109303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.109311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 [2024-05-15 19:46:40.109680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.109965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.109972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 [2024-05-15 19:46:40.110361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.110738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.110746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 [2024-05-15 19:46:40.111133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.111495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.111502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 [2024-05-15 19:46:40.111872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.112268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.112276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 [2024-05-15 19:46:40.112651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.113048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.113056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 
00:31:14.076 [2024-05-15 19:46:40.113422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.113812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.113819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 [2024-05-15 19:46:40.114181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.114548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.114556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 [2024-05-15 19:46:40.114966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.115326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.115335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 [2024-05-15 19:46:40.115714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.115998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.116006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 [2024-05-15 19:46:40.116389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.116632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.116640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 [2024-05-15 19:46:40.117038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.117437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.117444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 [2024-05-15 19:46:40.117798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.118072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-05-15 19:46:40.118080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 
00:31:14.076 [2024-05-15 19:46:40.118392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:14.076 [2024-05-15 19:46:40.118774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:14.076 [2024-05-15 19:46:40.118782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420
00:31:14.076 qpair failed and we were unable to recover it.
00:31:14.076 [2024-05-15 19:46:40.119154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:14.076 [2024-05-15 19:46:40.119514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:14.076 [2024-05-15 19:46:40.119522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420
00:31:14.076 qpair failed and we were unable to recover it.
[... the same four-line sequence (two posix.c:1037:posix_sock_create connect() failures with errno = 111, one nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats for every reconnect attempt logged from 19:46:40.119909 through 19:46:40.227916 (elapsed 00:31:14.076 to 00:31:14.080) ...]
00:31:14.080 [2024-05-15 19:46:40.228284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:14.080 [2024-05-15 19:46:40.228560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:14.080 [2024-05-15 19:46:40.228567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420
00:31:14.080 qpair failed and we were unable to recover it.
00:31:14.080 [2024-05-15 19:46:40.228924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-05-15 19:46:40.229275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-05-15 19:46:40.229282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.080 qpair failed and we were unable to recover it. 00:31:14.080 [2024-05-15 19:46:40.229671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-05-15 19:46:40.229946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-05-15 19:46:40.229953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.080 qpair failed and we were unable to recover it. 00:31:14.080 [2024-05-15 19:46:40.230319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-05-15 19:46:40.230695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-05-15 19:46:40.230702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.080 qpair failed and we were unable to recover it. 00:31:14.080 [2024-05-15 19:46:40.230928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-05-15 19:46:40.231219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-05-15 19:46:40.231226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.080 qpair failed and we were unable to recover it. 00:31:14.080 [2024-05-15 19:46:40.231595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-05-15 19:46:40.231986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-05-15 19:46:40.231992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.080 qpair failed and we were unable to recover it. 00:31:14.080 [2024-05-15 19:46:40.232298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-05-15 19:46:40.232647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-05-15 19:46:40.232654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.080 qpair failed and we were unable to recover it. 00:31:14.080 [2024-05-15 19:46:40.232881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-05-15 19:46:40.233268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-05-15 19:46:40.233274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.080 qpair failed and we were unable to recover it. 
00:31:14.080 [2024-05-15 19:46:40.233612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-05-15 19:46:40.233980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-05-15 19:46:40.233988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.080 qpair failed and we were unable to recover it. 00:31:14.080 [2024-05-15 19:46:40.234379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-05-15 19:46:40.234791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-05-15 19:46:40.234797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.080 qpair failed and we were unable to recover it. 00:31:14.080 [2024-05-15 19:46:40.235147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-05-15 19:46:40.235500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-05-15 19:46:40.235508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.080 qpair failed and we were unable to recover it. 00:31:14.080 [2024-05-15 19:46:40.235764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-05-15 19:46:40.235955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.081 [2024-05-15 19:46:40.235962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.081 qpair failed and we were unable to recover it. 00:31:14.081 [2024-05-15 19:46:40.236343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.081 [2024-05-15 19:46:40.236573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.081 [2024-05-15 19:46:40.236580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.081 qpair failed and we were unable to recover it. 00:31:14.081 [2024-05-15 19:46:40.236920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.081 [2024-05-15 19:46:40.237276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.081 [2024-05-15 19:46:40.237284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.081 qpair failed and we were unable to recover it. 00:31:14.081 [2024-05-15 19:46:40.237684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.081 [2024-05-15 19:46:40.238106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.081 [2024-05-15 19:46:40.238112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.081 qpair failed and we were unable to recover it. 
00:31:14.081 [2024-05-15 19:46:40.238461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.081 [2024-05-15 19:46:40.238839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.081 [2024-05-15 19:46:40.238846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.081 qpair failed and we were unable to recover it. 00:31:14.081 [2024-05-15 19:46:40.239117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.081 [2024-05-15 19:46:40.239391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.081 [2024-05-15 19:46:40.239398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.081 qpair failed and we were unable to recover it. 00:31:14.081 [2024-05-15 19:46:40.239781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.081 [2024-05-15 19:46:40.240131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.081 [2024-05-15 19:46:40.240138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.081 qpair failed and we were unable to recover it. 00:31:14.081 [2024-05-15 19:46:40.240498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.081 [2024-05-15 19:46:40.240693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.081 [2024-05-15 19:46:40.240700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.081 qpair failed and we were unable to recover it. 00:31:14.081 [2024-05-15 19:46:40.241087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.081 [2024-05-15 19:46:40.241324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.081 [2024-05-15 19:46:40.241331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.081 qpair failed and we were unable to recover it. 00:31:14.081 [2024-05-15 19:46:40.241681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.081 [2024-05-15 19:46:40.242039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.081 [2024-05-15 19:46:40.242046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.081 qpair failed and we were unable to recover it. 00:31:14.081 [2024-05-15 19:46:40.242425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.081 [2024-05-15 19:46:40.242812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.081 [2024-05-15 19:46:40.242819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.081 qpair failed and we were unable to recover it. 
00:31:14.081 [2024-05-15 19:46:40.243207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.081 [2024-05-15 19:46:40.243541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.081 [2024-05-15 19:46:40.243548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.081 qpair failed and we were unable to recover it. 00:31:14.081 [2024-05-15 19:46:40.243917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.081 [2024-05-15 19:46:40.244244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.081 [2024-05-15 19:46:40.244250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.081 qpair failed and we were unable to recover it. 00:31:14.081 [2024-05-15 19:46:40.244528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.081 [2024-05-15 19:46:40.244875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.081 [2024-05-15 19:46:40.244881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.081 qpair failed and we were unable to recover it. 00:31:14.081 [2024-05-15 19:46:40.245225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.081 [2024-05-15 19:46:40.245404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.081 [2024-05-15 19:46:40.245411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.081 qpair failed and we were unable to recover it. 00:31:14.081 [2024-05-15 19:46:40.245864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.081 [2024-05-15 19:46:40.246215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.081 [2024-05-15 19:46:40.246221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.081 qpair failed and we were unable to recover it. 00:31:14.081 [2024-05-15 19:46:40.246552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.081 [2024-05-15 19:46:40.246830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.081 [2024-05-15 19:46:40.246836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.081 qpair failed and we were unable to recover it. 00:31:14.081 [2024-05-15 19:46:40.247286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.351 [2024-05-15 19:46:40.247633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.351 [2024-05-15 19:46:40.247641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.351 qpair failed and we were unable to recover it. 
00:31:14.351 [2024-05-15 19:46:40.248005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.351 [2024-05-15 19:46:40.248404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.351 [2024-05-15 19:46:40.248412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.351 qpair failed and we were unable to recover it. 00:31:14.351 [2024-05-15 19:46:40.248802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.351 [2024-05-15 19:46:40.249004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.351 [2024-05-15 19:46:40.249011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.351 qpair failed and we were unable to recover it. 00:31:14.352 [2024-05-15 19:46:40.249378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.249777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.249784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.352 qpair failed and we were unable to recover it. 00:31:14.352 [2024-05-15 19:46:40.250135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.250487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.250494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.352 qpair failed and we were unable to recover it. 00:31:14.352 [2024-05-15 19:46:40.250885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.251264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.251271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.352 qpair failed and we were unable to recover it. 00:31:14.352 [2024-05-15 19:46:40.251667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.252054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.252060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.352 qpair failed and we were unable to recover it. 00:31:14.352 [2024-05-15 19:46:40.252414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.252795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.252801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.352 qpair failed and we were unable to recover it. 
00:31:14.352 [2024-05-15 19:46:40.253188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.253551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.253558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.352 qpair failed and we were unable to recover it. 00:31:14.352 [2024-05-15 19:46:40.253947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.254339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.254347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.352 qpair failed and we were unable to recover it. 00:31:14.352 [2024-05-15 19:46:40.254738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.255094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.255100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.352 qpair failed and we were unable to recover it. 00:31:14.352 [2024-05-15 19:46:40.255491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.255869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.255875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.352 qpair failed and we were unable to recover it. 00:31:14.352 [2024-05-15 19:46:40.256184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.256547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.256553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.352 qpair failed and we were unable to recover it. 00:31:14.352 [2024-05-15 19:46:40.256898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.257256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.257262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.352 qpair failed and we were unable to recover it. 00:31:14.352 [2024-05-15 19:46:40.257451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.257795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.257801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.352 qpair failed and we were unable to recover it. 
00:31:14.352 [2024-05-15 19:46:40.258157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.258391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.258398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.352 qpair failed and we were unable to recover it. 00:31:14.352 [2024-05-15 19:46:40.258760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.259123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.259129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.352 qpair failed and we were unable to recover it. 00:31:14.352 [2024-05-15 19:46:40.259482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.259832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.259846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.352 qpair failed and we were unable to recover it. 00:31:14.352 [2024-05-15 19:46:40.260112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.260544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.260551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.352 qpair failed and we were unable to recover it. 00:31:14.352 [2024-05-15 19:46:40.260938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.261293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.261300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.352 qpair failed and we were unable to recover it. 00:31:14.352 [2024-05-15 19:46:40.261657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.261997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.262004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.352 qpair failed and we were unable to recover it. 00:31:14.352 [2024-05-15 19:46:40.262350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.262740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.262747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.352 qpair failed and we were unable to recover it. 
00:31:14.352 [2024-05-15 19:46:40.263101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.263511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.263518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.352 qpair failed and we were unable to recover it. 00:31:14.352 [2024-05-15 19:46:40.263883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.264242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.264248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.352 qpair failed and we were unable to recover it. 00:31:14.352 [2024-05-15 19:46:40.264604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.264991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.264997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.352 qpair failed and we were unable to recover it. 00:31:14.352 [2024-05-15 19:46:40.265385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.265739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.265745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.352 qpair failed and we were unable to recover it. 00:31:14.352 [2024-05-15 19:46:40.266132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.266400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.266407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.352 qpair failed and we were unable to recover it. 00:31:14.352 [2024-05-15 19:46:40.266670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.267068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.267074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.352 qpair failed and we were unable to recover it. 00:31:14.352 [2024-05-15 19:46:40.267298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.267639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.267646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.352 qpair failed and we were unable to recover it. 
00:31:14.352 [2024-05-15 19:46:40.268067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.268416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.268423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.352 qpair failed and we were unable to recover it. 00:31:14.352 [2024-05-15 19:46:40.268794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.269190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.269197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.352 qpair failed and we were unable to recover it. 00:31:14.352 [2024-05-15 19:46:40.269430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.269831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.269837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.352 qpair failed and we were unable to recover it. 00:31:14.352 [2024-05-15 19:46:40.270187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.352 [2024-05-15 19:46:40.270547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.270553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.353 qpair failed and we were unable to recover it. 00:31:14.353 [2024-05-15 19:46:40.270901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.271264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.271270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.353 qpair failed and we were unable to recover it. 00:31:14.353 [2024-05-15 19:46:40.271628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.271892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.271898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.353 qpair failed and we were unable to recover it. 00:31:14.353 [2024-05-15 19:46:40.272260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.272612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.272618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.353 qpair failed and we were unable to recover it. 
00:31:14.353 [2024-05-15 19:46:40.273007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.273269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.273275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.353 qpair failed and we were unable to recover it. 00:31:14.353 [2024-05-15 19:46:40.273627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.273961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.273968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.353 qpair failed and we were unable to recover it. 00:31:14.353 [2024-05-15 19:46:40.274345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.274714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.274721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.353 qpair failed and we were unable to recover it. 00:31:14.353 [2024-05-15 19:46:40.275074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.275428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.275437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.353 qpair failed and we were unable to recover it. 00:31:14.353 [2024-05-15 19:46:40.275801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.276182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.276189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.353 qpair failed and we were unable to recover it. 00:31:14.353 [2024-05-15 19:46:40.276585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.276986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.276993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.353 qpair failed and we were unable to recover it. 00:31:14.353 [2024-05-15 19:46:40.277380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.277757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.277763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.353 qpair failed and we were unable to recover it. 
00:31:14.353 [2024-05-15 19:46:40.278145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.278505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.278512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.353 qpair failed and we were unable to recover it. 00:31:14.353 [2024-05-15 19:46:40.278859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.279154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.279160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.353 qpair failed and we were unable to recover it. 00:31:14.353 [2024-05-15 19:46:40.279530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.279880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.279886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.353 qpair failed and we were unable to recover it. 00:31:14.353 [2024-05-15 19:46:40.280243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.280664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.280670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.353 qpair failed and we were unable to recover it. 00:31:14.353 [2024-05-15 19:46:40.281031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.281235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.281242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.353 qpair failed and we were unable to recover it. 00:31:14.353 [2024-05-15 19:46:40.281609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.281991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.281997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.353 qpair failed and we were unable to recover it. 00:31:14.353 [2024-05-15 19:46:40.282269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.282466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.282475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.353 qpair failed and we were unable to recover it. 
00:31:14.353 [2024-05-15 19:46:40.282837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.283238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.283244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.353 qpair failed and we were unable to recover it. 00:31:14.353 [2024-05-15 19:46:40.283521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.283848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.283854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.353 qpair failed and we were unable to recover it. 00:31:14.353 [2024-05-15 19:46:40.284242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.284570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.284576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.353 qpair failed and we were unable to recover it. 00:31:14.353 [2024-05-15 19:46:40.284961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.285310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.285323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.353 qpair failed and we were unable to recover it. 00:31:14.353 [2024-05-15 19:46:40.285690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.286038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.286044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.353 qpair failed and we were unable to recover it. 00:31:14.353 [2024-05-15 19:46:40.286437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.286770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.286776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.353 qpair failed and we were unable to recover it. 00:31:14.353 [2024-05-15 19:46:40.287158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.287480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.287486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.353 qpair failed and we were unable to recover it. 
00:31:14.353 [2024-05-15 19:46:40.287830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.288123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.288129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.353 qpair failed and we were unable to recover it. 00:31:14.353 [2024-05-15 19:46:40.288522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.288896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.288902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.353 qpair failed and we were unable to recover it. 00:31:14.353 [2024-05-15 19:46:40.289197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.289596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.289604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.353 qpair failed and we were unable to recover it. 00:31:14.353 [2024-05-15 19:46:40.289864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.290224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.290230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.353 qpair failed and we were unable to recover it. 00:31:14.353 [2024-05-15 19:46:40.290632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.290988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.353 [2024-05-15 19:46:40.290994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.353 qpair failed and we were unable to recover it. 00:31:14.354 [2024-05-15 19:46:40.291364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.354 [2024-05-15 19:46:40.291724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.354 [2024-05-15 19:46:40.291731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.354 qpair failed and we were unable to recover it. 00:31:14.354 [2024-05-15 19:46:40.292078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.354 [2024-05-15 19:46:40.292417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.354 [2024-05-15 19:46:40.292423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.354 qpair failed and we were unable to recover it. 
00:31:14.354 [2024-05-15 19:46:40.292790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.354 [2024-05-15 19:46:40.293154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.354 [2024-05-15 19:46:40.293161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.354 qpair failed and we were unable to recover it. 00:31:14.354 [2024-05-15 19:46:40.293420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.354 [2024-05-15 19:46:40.293677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.354 [2024-05-15 19:46:40.293684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.354 qpair failed and we were unable to recover it. 00:31:14.354 [2024-05-15 19:46:40.294030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.354 [2024-05-15 19:46:40.294394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.354 [2024-05-15 19:46:40.294401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.354 qpair failed and we were unable to recover it. 00:31:14.354 [2024-05-15 19:46:40.294824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.354 [2024-05-15 19:46:40.295051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.354 [2024-05-15 19:46:40.295066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.354 qpair failed and we were unable to recover it. 00:31:14.354 [2024-05-15 19:46:40.295432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.354 [2024-05-15 19:46:40.295663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.354 [2024-05-15 19:46:40.295671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.354 qpair failed and we were unable to recover it. 00:31:14.354 [2024-05-15 19:46:40.296068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.354 [2024-05-15 19:46:40.296340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.354 [2024-05-15 19:46:40.296348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.354 qpair failed and we were unable to recover it. 00:31:14.354 [2024-05-15 19:46:40.296601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.354 [2024-05-15 19:46:40.296964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.354 [2024-05-15 19:46:40.296970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.354 qpair failed and we were unable to recover it. 
00:31:14.354 [2024-05-15 19:46:40.297322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:14.354 [2024-05-15 19:46:40.297674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:14.354 [2024-05-15 19:46:40.297681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420
00:31:14.354 qpair failed and we were unable to recover it.
00:31:14.354 [... the same sequence (one or two posix.c:1037:posix_sock_create "connect() failed, errno = 111" messages, the nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420" message, and "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 19:46:40.297855 through 19:46:40.403675 ...]
00:31:14.359 [2024-05-15 19:46:40.404060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.359 [2024-05-15 19:46:40.404287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.359 [2024-05-15 19:46:40.404293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.359 qpair failed and we were unable to recover it. 00:31:14.359 [2024-05-15 19:46:40.404532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.359 [2024-05-15 19:46:40.404887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.359 [2024-05-15 19:46:40.404894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.359 qpair failed and we were unable to recover it. 00:31:14.359 [2024-05-15 19:46:40.405263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.359 [2024-05-15 19:46:40.405668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.359 [2024-05-15 19:46:40.405676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.359 qpair failed and we were unable to recover it. 00:31:14.359 [2024-05-15 19:46:40.406051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.359 [2024-05-15 19:46:40.406413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.359 [2024-05-15 19:46:40.406419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.359 qpair failed and we were unable to recover it. 00:31:14.359 [2024-05-15 19:46:40.406634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.359 [2024-05-15 19:46:40.407032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.359 [2024-05-15 19:46:40.407037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.359 qpair failed and we were unable to recover it. 00:31:14.359 [2024-05-15 19:46:40.407319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.359 [2024-05-15 19:46:40.407632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.359 [2024-05-15 19:46:40.407640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.359 qpair failed and we were unable to recover it. 00:31:14.359 [2024-05-15 19:46:40.407990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.359 [2024-05-15 19:46:40.408370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.359 [2024-05-15 19:46:40.408377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.359 qpair failed and we were unable to recover it. 
00:31:14.359 [2024-05-15 19:46:40.408643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.359 [2024-05-15 19:46:40.409019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.359 [2024-05-15 19:46:40.409025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.359 qpair failed and we were unable to recover it. 00:31:14.359 [2024-05-15 19:46:40.409409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.359 [2024-05-15 19:46:40.409766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.359 [2024-05-15 19:46:40.409773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.360 qpair failed and we were unable to recover it. 00:31:14.360 [2024-05-15 19:46:40.410038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.410446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.410453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.360 qpair failed and we were unable to recover it. 00:31:14.360 [2024-05-15 19:46:40.410714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.411080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.411086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.360 qpair failed and we were unable to recover it. 00:31:14.360 [2024-05-15 19:46:40.411429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.411820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.411826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.360 qpair failed and we were unable to recover it. 00:31:14.360 [2024-05-15 19:46:40.412178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.412547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.412553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.360 qpair failed and we were unable to recover it. 00:31:14.360 [2024-05-15 19:46:40.412956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.413164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.413171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.360 qpair failed and we were unable to recover it. 
00:31:14.360 [2024-05-15 19:46:40.413547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.413898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.413904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.360 qpair failed and we were unable to recover it. 00:31:14.360 [2024-05-15 19:46:40.414274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.414590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.414597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.360 qpair failed and we were unable to recover it. 00:31:14.360 [2024-05-15 19:46:40.414964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.415156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.415163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.360 qpair failed and we were unable to recover it. 00:31:14.360 [2024-05-15 19:46:40.415611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.415962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.415969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.360 qpair failed and we were unable to recover it. 00:31:14.360 [2024-05-15 19:46:40.416353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.416693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.416699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.360 qpair failed and we were unable to recover it. 00:31:14.360 [2024-05-15 19:46:40.417067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.417297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.417304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.360 qpair failed and we were unable to recover it. 00:31:14.360 [2024-05-15 19:46:40.417591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.417917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.417924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.360 qpair failed and we were unable to recover it. 
00:31:14.360 [2024-05-15 19:46:40.418306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.418694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.418701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.360 qpair failed and we were unable to recover it. 00:31:14.360 [2024-05-15 19:46:40.418981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.419360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.419366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.360 qpair failed and we were unable to recover it. 00:31:14.360 [2024-05-15 19:46:40.419603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.419965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.419971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.360 qpair failed and we were unable to recover it. 00:31:14.360 [2024-05-15 19:46:40.420157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.420484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.420491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.360 qpair failed and we were unable to recover it. 00:31:14.360 [2024-05-15 19:46:40.420865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.421257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.421263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.360 qpair failed and we were unable to recover it. 00:31:14.360 [2024-05-15 19:46:40.421654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.422004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.422010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.360 qpair failed and we were unable to recover it. 00:31:14.360 [2024-05-15 19:46:40.422180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.422556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.422563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.360 qpair failed and we were unable to recover it. 
00:31:14.360 [2024-05-15 19:46:40.422939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.423299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.423305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.360 qpair failed and we were unable to recover it. 00:31:14.360 [2024-05-15 19:46:40.423689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.424094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.424100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.360 qpair failed and we were unable to recover it. 00:31:14.360 [2024-05-15 19:46:40.424547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.424963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.424972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.360 qpair failed and we were unable to recover it. 00:31:14.360 [2024-05-15 19:46:40.425365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.425730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.425737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.360 qpair failed and we were unable to recover it. 00:31:14.360 [2024-05-15 19:46:40.426130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.426547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.426555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.360 qpair failed and we were unable to recover it. 00:31:14.360 [2024-05-15 19:46:40.426898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.360 [2024-05-15 19:46:40.427269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.427275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.361 qpair failed and we were unable to recover it. 00:31:14.361 [2024-05-15 19:46:40.427497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.427860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.427866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.361 qpair failed and we were unable to recover it. 
00:31:14.361 [2024-05-15 19:46:40.428211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.428557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.428564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.361 qpair failed and we were unable to recover it. 00:31:14.361 [2024-05-15 19:46:40.428959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.429121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.429128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.361 qpair failed and we were unable to recover it. 00:31:14.361 [2024-05-15 19:46:40.429543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.429893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.429899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.361 qpair failed and we were unable to recover it. 00:31:14.361 [2024-05-15 19:46:40.430290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.430614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.430620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.361 qpair failed and we were unable to recover it. 00:31:14.361 [2024-05-15 19:46:40.431016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.431359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.431366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.361 qpair failed and we were unable to recover it. 00:31:14.361 [2024-05-15 19:46:40.431723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.432100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.432106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.361 qpair failed and we were unable to recover it. 00:31:14.361 [2024-05-15 19:46:40.432463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.432837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.432843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.361 qpair failed and we were unable to recover it. 
00:31:14.361 [2024-05-15 19:46:40.433189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.433436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.433443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.361 qpair failed and we were unable to recover it. 00:31:14.361 [2024-05-15 19:46:40.433794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.434149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.434156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.361 qpair failed and we were unable to recover it. 00:31:14.361 [2024-05-15 19:46:40.434527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.434888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.434894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.361 qpair failed and we were unable to recover it. 00:31:14.361 [2024-05-15 19:46:40.435282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.435594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.435602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.361 qpair failed and we were unable to recover it. 00:31:14.361 [2024-05-15 19:46:40.435871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.436187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.436194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.361 qpair failed and we were unable to recover it. 00:31:14.361 [2024-05-15 19:46:40.436581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.436922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.436929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.361 qpair failed and we were unable to recover it. 00:31:14.361 [2024-05-15 19:46:40.437291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.437477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.437483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.361 qpair failed and we were unable to recover it. 
00:31:14.361 [2024-05-15 19:46:40.437829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.438206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.438212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.361 qpair failed and we were unable to recover it. 00:31:14.361 [2024-05-15 19:46:40.438554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.438944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.438950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.361 qpair failed and we were unable to recover it. 00:31:14.361 [2024-05-15 19:46:40.439342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.439697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.439704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.361 qpair failed and we were unable to recover it. 00:31:14.361 [2024-05-15 19:46:40.439947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.440288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.440295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.361 qpair failed and we were unable to recover it. 00:31:14.361 [2024-05-15 19:46:40.440689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.441085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.441092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.361 qpair failed and we were unable to recover it. 00:31:14.361 [2024-05-15 19:46:40.441289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.441629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.441635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.361 qpair failed and we were unable to recover it. 00:31:14.361 [2024-05-15 19:46:40.442020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.442372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.442379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.361 qpair failed and we were unable to recover it. 
00:31:14.361 [2024-05-15 19:46:40.442764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.443130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.443136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.361 qpair failed and we were unable to recover it. 00:31:14.361 [2024-05-15 19:46:40.443498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.443827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.443835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.361 qpair failed and we were unable to recover it. 00:31:14.361 [2024-05-15 19:46:40.444197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.444521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.444527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.361 qpair failed and we were unable to recover it. 00:31:14.361 [2024-05-15 19:46:40.444881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.445296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.445302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.361 qpair failed and we were unable to recover it. 00:31:14.361 [2024-05-15 19:46:40.445657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.446019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.446025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.361 qpair failed and we were unable to recover it. 00:31:14.361 [2024-05-15 19:46:40.446502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.446966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.446975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.361 qpair failed and we were unable to recover it. 00:31:14.361 [2024-05-15 19:46:40.447370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.447717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.361 [2024-05-15 19:46:40.447724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.361 qpair failed and we were unable to recover it. 
00:31:14.361 [2024-05-15 19:46:40.448100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.448462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.448469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.362 qpair failed and we were unable to recover it. 00:31:14.362 [2024-05-15 19:46:40.448822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.449177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.449184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.362 qpair failed and we were unable to recover it. 00:31:14.362 [2024-05-15 19:46:40.449384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.449736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.449742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.362 qpair failed and we were unable to recover it. 00:31:14.362 [2024-05-15 19:46:40.450128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.450502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.450509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.362 qpair failed and we were unable to recover it. 00:31:14.362 [2024-05-15 19:46:40.450900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.451129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.451136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.362 qpair failed and we were unable to recover it. 00:31:14.362 [2024-05-15 19:46:40.451500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.451881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.451887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.362 qpair failed and we were unable to recover it. 00:31:14.362 [2024-05-15 19:46:40.452237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.452524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.452531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.362 qpair failed and we were unable to recover it. 
00:31:14.362 [2024-05-15 19:46:40.452940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.453241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.453247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.362 qpair failed and we were unable to recover it. 00:31:14.362 [2024-05-15 19:46:40.453608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.453980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.453986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.362 qpair failed and we were unable to recover it. 00:31:14.362 [2024-05-15 19:46:40.454374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.454725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.454731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.362 qpair failed and we were unable to recover it. 00:31:14.362 [2024-05-15 19:46:40.455118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.455481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.455488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.362 qpair failed and we were unable to recover it. 00:31:14.362 [2024-05-15 19:46:40.455850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.456229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.456236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.362 qpair failed and we were unable to recover it. 00:31:14.362 [2024-05-15 19:46:40.456426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.456690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.456696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.362 qpair failed and we were unable to recover it. 00:31:14.362 [2024-05-15 19:46:40.457099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.457495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.457501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.362 qpair failed and we were unable to recover it. 
00:31:14.362 [2024-05-15 19:46:40.457850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.458183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.458190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.362 qpair failed and we were unable to recover it. 00:31:14.362 [2024-05-15 19:46:40.458576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.458923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.458929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.362 qpair failed and we were unable to recover it. 00:31:14.362 [2024-05-15 19:46:40.459198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.459594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.459600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.362 qpair failed and we were unable to recover it. 00:31:14.362 [2024-05-15 19:46:40.459956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.460302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.460308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.362 qpair failed and we were unable to recover it. 00:31:14.362 [2024-05-15 19:46:40.460760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.461028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.461034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.362 qpair failed and we were unable to recover it. 00:31:14.362 [2024-05-15 19:46:40.461512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.461968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.461977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.362 qpair failed and we were unable to recover it. 00:31:14.362 [2024-05-15 19:46:40.462324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.462705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.462712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.362 qpair failed and we were unable to recover it. 
00:31:14.362 [2024-05-15 19:46:40.463127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.463455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.463462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.362 qpair failed and we were unable to recover it. 00:31:14.362 [2024-05-15 19:46:40.463826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.464188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.464196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.362 qpair failed and we were unable to recover it. 00:31:14.362 [2024-05-15 19:46:40.464348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.464687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.464694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.362 qpair failed and we were unable to recover it. 00:31:14.362 [2024-05-15 19:46:40.465087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.465489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.465496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.362 qpair failed and we were unable to recover it. 00:31:14.362 [2024-05-15 19:46:40.465830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.466218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.466224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.362 qpair failed and we were unable to recover it. 00:31:14.362 [2024-05-15 19:46:40.466592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.466971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.466978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.362 qpair failed and we were unable to recover it. 00:31:14.362 [2024-05-15 19:46:40.467251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.467497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.467504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.362 qpair failed and we were unable to recover it. 
00:31:14.362 [2024-05-15 19:46:40.467881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.468238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.468245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.362 qpair failed and we were unable to recover it. 00:31:14.362 [2024-05-15 19:46:40.468632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.469032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.362 [2024-05-15 19:46:40.469038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.363 qpair failed and we were unable to recover it. 00:31:14.363 [2024-05-15 19:46:40.469393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.469662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.469669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.363 qpair failed and we were unable to recover it. 00:31:14.363 [2024-05-15 19:46:40.470065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.470423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.470430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.363 qpair failed and we were unable to recover it. 00:31:14.363 [2024-05-15 19:46:40.470779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.471131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.471137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.363 qpair failed and we were unable to recover it. 00:31:14.363 [2024-05-15 19:46:40.471396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.471766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.471772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.363 qpair failed and we were unable to recover it. 00:31:14.363 [2024-05-15 19:46:40.472116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.472364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.472372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.363 qpair failed and we were unable to recover it. 
00:31:14.363 [2024-05-15 19:46:40.472782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.473143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.473150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.363 qpair failed and we were unable to recover it. 00:31:14.363 [2024-05-15 19:46:40.473495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.473817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.473823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.363 qpair failed and we were unable to recover it. 00:31:14.363 [2024-05-15 19:46:40.474175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.474534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.474540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.363 qpair failed and we were unable to recover it. 00:31:14.363 [2024-05-15 19:46:40.474903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.475277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.475283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.363 qpair failed and we were unable to recover it. 00:31:14.363 [2024-05-15 19:46:40.475630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.475987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.475993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.363 qpair failed and we were unable to recover it. 00:31:14.363 [2024-05-15 19:46:40.476360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.476688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.476694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.363 qpair failed and we were unable to recover it. 00:31:14.363 [2024-05-15 19:46:40.477083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.477353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.477360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.363 qpair failed and we were unable to recover it. 
00:31:14.363 [2024-05-15 19:46:40.477722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.478073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.478079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.363 qpair failed and we were unable to recover it. 00:31:14.363 [2024-05-15 19:46:40.478437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.478830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.478837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.363 qpair failed and we were unable to recover it. 00:31:14.363 [2024-05-15 19:46:40.479202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.479544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.479551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.363 qpair failed and we were unable to recover it. 00:31:14.363 [2024-05-15 19:46:40.479902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.480264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.480271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.363 qpair failed and we were unable to recover it. 00:31:14.363 [2024-05-15 19:46:40.480630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.480964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.480970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.363 qpair failed and we were unable to recover it. 00:31:14.363 [2024-05-15 19:46:40.481213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.481517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.481524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.363 qpair failed and we were unable to recover it. 00:31:14.363 [2024-05-15 19:46:40.481924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.482283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.482289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.363 qpair failed and we were unable to recover it. 
00:31:14.363 [2024-05-15 19:46:40.482639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.483036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.483043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.363 qpair failed and we were unable to recover it. 00:31:14.363 [2024-05-15 19:46:40.483402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.483798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.483804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.363 qpair failed and we were unable to recover it. 00:31:14.363 [2024-05-15 19:46:40.484066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.484404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.484410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.363 qpair failed and we were unable to recover it. 00:31:14.363 [2024-05-15 19:46:40.484700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.484919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.484925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.363 qpair failed and we were unable to recover it. 00:31:14.363 [2024-05-15 19:46:40.485227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.485638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.485645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.363 qpair failed and we were unable to recover it. 00:31:14.363 [2024-05-15 19:46:40.485993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.486355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.486361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.363 qpair failed and we were unable to recover it. 00:31:14.363 [2024-05-15 19:46:40.486740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.487107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.487113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.363 qpair failed and we were unable to recover it. 
00:31:14.363 [2024-05-15 19:46:40.487466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.487847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.487853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.363 qpair failed and we were unable to recover it. 00:31:14.363 [2024-05-15 19:46:40.488226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.488577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.488584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.363 qpair failed and we were unable to recover it. 00:31:14.363 [2024-05-15 19:46:40.488969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.489319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.489326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.363 qpair failed and we were unable to recover it. 00:31:14.363 [2024-05-15 19:46:40.489701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.363 [2024-05-15 19:46:40.489965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.489972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.364 qpair failed and we were unable to recover it. 00:31:14.364 [2024-05-15 19:46:40.490346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.490699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.490705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.364 qpair failed and we were unable to recover it. 00:31:14.364 [2024-05-15 19:46:40.491074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.491235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.491242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.364 qpair failed and we were unable to recover it. 00:31:14.364 [2024-05-15 19:46:40.491506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.491862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.491868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.364 qpair failed and we were unable to recover it. 
00:31:14.364 [2024-05-15 19:46:40.492121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.492500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.492508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.364 qpair failed and we were unable to recover it. 00:31:14.364 [2024-05-15 19:46:40.492731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.493129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.493135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.364 qpair failed and we were unable to recover it. 00:31:14.364 [2024-05-15 19:46:40.493480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.493812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.493819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.364 qpair failed and we were unable to recover it. 00:31:14.364 [2024-05-15 19:46:40.494207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.494593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.494600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.364 qpair failed and we were unable to recover it. 00:31:14.364 [2024-05-15 19:46:40.494946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.495135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.495142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.364 qpair failed and we were unable to recover it. 00:31:14.364 [2024-05-15 19:46:40.495488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.495884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.495890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.364 qpair failed and we were unable to recover it. 00:31:14.364 [2024-05-15 19:46:40.496133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.496483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.496489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.364 qpair failed and we were unable to recover it. 
00:31:14.364 [2024-05-15 19:46:40.496848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.497135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.497142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.364 qpair failed and we were unable to recover it. 00:31:14.364 [2024-05-15 19:46:40.497524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.497881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.497887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.364 qpair failed and we were unable to recover it. 00:31:14.364 [2024-05-15 19:46:40.498281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.498645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.498652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.364 qpair failed and we were unable to recover it. 00:31:14.364 [2024-05-15 19:46:40.499027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.499377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.499385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.364 qpair failed and we were unable to recover it. 00:31:14.364 [2024-05-15 19:46:40.499647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.499891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.499898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.364 qpair failed and we were unable to recover it. 00:31:14.364 [2024-05-15 19:46:40.500262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.500613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.500619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.364 qpair failed and we were unable to recover it. 00:31:14.364 [2024-05-15 19:46:40.500995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.501354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.501361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.364 qpair failed and we were unable to recover it. 
00:31:14.364 [2024-05-15 19:46:40.501763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.502120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.502127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.364 qpair failed and we were unable to recover it. 00:31:14.364 [2024-05-15 19:46:40.502475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.502826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.502832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.364 qpair failed and we were unable to recover it. 00:31:14.364 [2024-05-15 19:46:40.503177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.503564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.503570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.364 qpair failed and we were unable to recover it. 00:31:14.364 [2024-05-15 19:46:40.503943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.504301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.504307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.364 qpair failed and we were unable to recover it. 00:31:14.364 [2024-05-15 19:46:40.504663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.505012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.505019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.364 qpair failed and we were unable to recover it. 00:31:14.364 [2024-05-15 19:46:40.505383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.505748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.505754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.364 qpair failed and we were unable to recover it. 00:31:14.364 [2024-05-15 19:46:40.505911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.506329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.506337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.364 qpair failed and we were unable to recover it. 
00:31:14.364 [2024-05-15 19:46:40.506603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.506979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.506985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.364 qpair failed and we were unable to recover it. 00:31:14.364 [2024-05-15 19:46:40.507342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.507700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.507706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.364 qpair failed and we were unable to recover it. 00:31:14.364 [2024-05-15 19:46:40.507847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.508162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.508177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.364 qpair failed and we were unable to recover it. 00:31:14.364 [2024-05-15 19:46:40.508548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.508864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.508870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.364 qpair failed and we were unable to recover it. 00:31:14.364 [2024-05-15 19:46:40.509233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.509640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.364 [2024-05-15 19:46:40.509646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.364 qpair failed and we were unable to recover it. 00:31:14.364 [2024-05-15 19:46:40.509997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.510384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.510391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.365 qpair failed and we were unable to recover it. 00:31:14.365 [2024-05-15 19:46:40.510761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.511131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.511137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.365 qpair failed and we were unable to recover it. 
00:31:14.365 [2024-05-15 19:46:40.511503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.511823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.511829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.365 qpair failed and we were unable to recover it. 00:31:14.365 [2024-05-15 19:46:40.512186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.512546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.512553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.365 qpair failed and we were unable to recover it. 00:31:14.365 [2024-05-15 19:46:40.512921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.513242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.513250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.365 qpair failed and we were unable to recover it. 00:31:14.365 [2024-05-15 19:46:40.513693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.514039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.514045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.365 qpair failed and we were unable to recover it. 00:31:14.365 [2024-05-15 19:46:40.514421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.514788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.514795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.365 qpair failed and we were unable to recover it. 00:31:14.365 [2024-05-15 19:46:40.515166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.515323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.515331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.365 qpair failed and we were unable to recover it. 00:31:14.365 [2024-05-15 19:46:40.515665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.516061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.516067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.365 qpair failed and we were unable to recover it. 
00:31:14.365 [2024-05-15 19:46:40.516416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.516777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.516783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.365 qpair failed and we were unable to recover it. 00:31:14.365 [2024-05-15 19:46:40.517134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.517490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.517497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.365 qpair failed and we were unable to recover it. 00:31:14.365 [2024-05-15 19:46:40.517677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.517925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.517931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.365 qpair failed and we were unable to recover it. 00:31:14.365 [2024-05-15 19:46:40.518275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.518623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.518630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.365 qpair failed and we were unable to recover it. 00:31:14.365 [2024-05-15 19:46:40.519019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.519385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.519391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.365 qpair failed and we were unable to recover it. 00:31:14.365 [2024-05-15 19:46:40.519742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.520038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.520050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.365 qpair failed and we were unable to recover it. 00:31:14.365 [2024-05-15 19:46:40.520419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.520814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.520821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.365 qpair failed and we were unable to recover it. 
00:31:14.365 [2024-05-15 19:46:40.521207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.521576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.521582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.365 qpair failed and we were unable to recover it. 00:31:14.365 [2024-05-15 19:46:40.521971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.522334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.522341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.365 qpair failed and we were unable to recover it. 00:31:14.365 [2024-05-15 19:46:40.522678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.523064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.523070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.365 qpair failed and we were unable to recover it. 00:31:14.365 [2024-05-15 19:46:40.523445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.523716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.523723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.365 qpair failed and we were unable to recover it. 00:31:14.365 [2024-05-15 19:46:40.524091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.524356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.524363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.365 qpair failed and we were unable to recover it. 00:31:14.365 [2024-05-15 19:46:40.524612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.524962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.524968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.365 qpair failed and we were unable to recover it. 00:31:14.365 [2024-05-15 19:46:40.525332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.525700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.525707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.365 qpair failed and we were unable to recover it. 
00:31:14.365 [2024-05-15 19:46:40.526070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.526431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.365 [2024-05-15 19:46:40.526438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.365 qpair failed and we were unable to recover it. 00:31:14.365 [2024-05-15 19:46:40.526823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.527185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.527192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.636 qpair failed and we were unable to recover it. 00:31:14.636 [2024-05-15 19:46:40.527666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.528011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.528018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.636 qpair failed and we were unable to recover it. 00:31:14.636 [2024-05-15 19:46:40.528370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.528655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.528661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.636 qpair failed and we were unable to recover it. 00:31:14.636 [2024-05-15 19:46:40.529005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.529385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.529393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.636 qpair failed and we were unable to recover it. 00:31:14.636 [2024-05-15 19:46:40.529744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.530093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.530100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.636 qpair failed and we were unable to recover it. 00:31:14.636 [2024-05-15 19:46:40.530462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.530829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.530836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.636 qpair failed and we were unable to recover it. 
00:31:14.636 [2024-05-15 19:46:40.531027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.531351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.531358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.636 qpair failed and we were unable to recover it. 00:31:14.636 [2024-05-15 19:46:40.531721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.532109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.532116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.636 qpair failed and we were unable to recover it. 00:31:14.636 [2024-05-15 19:46:40.532383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.532757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.532764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.636 qpair failed and we were unable to recover it. 00:31:14.636 [2024-05-15 19:46:40.533015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.533392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.533398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.636 qpair failed and we were unable to recover it. 00:31:14.636 [2024-05-15 19:46:40.533784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.534176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.534182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.636 qpair failed and we were unable to recover it. 00:31:14.636 [2024-05-15 19:46:40.534547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.534928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.534935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.636 qpair failed and we were unable to recover it. 00:31:14.636 [2024-05-15 19:46:40.535299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.535700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.535708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.636 qpair failed and we were unable to recover it. 
00:31:14.636 [2024-05-15 19:46:40.536094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.536489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.536496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.636 qpair failed and we were unable to recover it. 00:31:14.636 [2024-05-15 19:46:40.536850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.537007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.537013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.636 qpair failed and we were unable to recover it. 00:31:14.636 [2024-05-15 19:46:40.537335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.537711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.537717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.636 qpair failed and we were unable to recover it. 00:31:14.636 [2024-05-15 19:46:40.538007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.538382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.538388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.636 qpair failed and we were unable to recover it. 00:31:14.636 [2024-05-15 19:46:40.538741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.539135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.539142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.636 qpair failed and we were unable to recover it. 00:31:14.636 [2024-05-15 19:46:40.539509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.539876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.539882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.636 qpair failed and we were unable to recover it. 00:31:14.636 [2024-05-15 19:46:40.540276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.540610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.540616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.636 qpair failed and we were unable to recover it. 
00:31:14.636 [2024-05-15 19:46:40.540971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.541187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.541194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.636 qpair failed and we were unable to recover it. 00:31:14.636 [2024-05-15 19:46:40.541558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.541902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.541908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.636 qpair failed and we were unable to recover it. 00:31:14.636 [2024-05-15 19:46:40.542253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.542635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.542641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.636 qpair failed and we were unable to recover it. 00:31:14.636 [2024-05-15 19:46:40.543021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.543403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.543409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.636 qpair failed and we were unable to recover it. 00:31:14.636 [2024-05-15 19:46:40.543756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.544114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.636 [2024-05-15 19:46:40.544121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.636 qpair failed and we were unable to recover it. 00:31:14.637 [2024-05-15 19:46:40.544316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.637 [2024-05-15 19:46:40.544658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.637 [2024-05-15 19:46:40.544666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.637 qpair failed and we were unable to recover it. 00:31:14.637 [2024-05-15 19:46:40.545026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.637 [2024-05-15 19:46:40.545376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.637 [2024-05-15 19:46:40.545383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.637 qpair failed and we were unable to recover it. 
00:31:14.637 [2024-05-15 19:46:40.545652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.637 [2024-05-15 19:46:40.546009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.637 [2024-05-15 19:46:40.546016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.637 qpair failed and we were unable to recover it. 00:31:14.637 [2024-05-15 19:46:40.546255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.637 [2024-05-15 19:46:40.546466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.637 [2024-05-15 19:46:40.546472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.637 qpair failed and we were unable to recover it. 00:31:14.637 [2024-05-15 19:46:40.546851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.637 [2024-05-15 19:46:40.547088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.637 [2024-05-15 19:46:40.547095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.637 qpair failed and we were unable to recover it. 00:31:14.637 [2024-05-15 19:46:40.547458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.637 [2024-05-15 19:46:40.547813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.637 [2024-05-15 19:46:40.547819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.637 qpair failed and we were unable to recover it. 00:31:14.637 [2024-05-15 19:46:40.548102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.637 [2024-05-15 19:46:40.548447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.637 [2024-05-15 19:46:40.548453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.637 qpair failed and we were unable to recover it. 00:31:14.637 [2024-05-15 19:46:40.548851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.637 [2024-05-15 19:46:40.549248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.637 [2024-05-15 19:46:40.549254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.637 qpair failed and we were unable to recover it. 00:31:14.637 [2024-05-15 19:46:40.549626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.637 [2024-05-15 19:46:40.550019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.637 [2024-05-15 19:46:40.550025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.637 qpair failed and we were unable to recover it. 
00:31:14.637 [2024-05-15 19:46:40.550300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.637 [2024-05-15 19:46:40.550590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.637 [2024-05-15 19:46:40.550597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.637 qpair failed and we were unable to recover it. 00:31:14.637 [2024-05-15 19:46:40.550989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.637 [2024-05-15 19:46:40.551356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.637 [2024-05-15 19:46:40.551362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.637 qpair failed and we were unable to recover it. 00:31:14.637 [2024-05-15 19:46:40.551713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.637 [2024-05-15 19:46:40.552077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.637 [2024-05-15 19:46:40.552083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.637 qpair failed and we were unable to recover it. 00:31:14.637 [2024-05-15 19:46:40.552335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.637 [2024-05-15 19:46:40.552696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.637 [2024-05-15 19:46:40.552702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.637 qpair failed and we were unable to recover it. 00:31:14.637 [2024-05-15 19:46:40.553117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.637 [2024-05-15 19:46:40.553484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.637 [2024-05-15 19:46:40.553491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.637 qpair failed and we were unable to recover it. 00:31:14.637 [2024-05-15 19:46:40.553869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.637 [2024-05-15 19:46:40.554188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.637 [2024-05-15 19:46:40.554196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.637 qpair failed and we were unable to recover it. 00:31:14.637 [2024-05-15 19:46:40.554471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.637 [2024-05-15 19:46:40.554819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.637 [2024-05-15 19:46:40.554825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.637 qpair failed and we were unable to recover it. 
00:31:14.637 [2024-05-15 19:46:40.555176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.637 [2024-05-15 19:46:40.555381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.637 [2024-05-15 19:46:40.555388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.637 qpair failed and we were unable to recover it. 00:31:14.637 [2024-05-15 19:46:40.555773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.637 [2024-05-15 19:46:40.556147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.637 [2024-05-15 19:46:40.556153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.637 qpair failed and we were unable to recover it. 00:31:14.637 [2024-05-15 19:46:40.556533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.637 [2024-05-15 19:46:40.556888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.637 [2024-05-15 19:46:40.556894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.637 qpair failed and we were unable to recover it. 00:31:14.637 [2024-05-15 19:46:40.557284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.637 [2024-05-15 19:46:40.557609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.637 [2024-05-15 19:46:40.557616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.637 qpair failed and we were unable to recover it. 00:31:14.637 [2024-05-15 19:46:40.557961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.637 [2024-05-15 19:46:40.558324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.637 [2024-05-15 19:46:40.558330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.637 qpair failed and we were unable to recover it. 00:31:14.637 [2024-05-15 19:46:40.558685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.637 [2024-05-15 19:46:40.559078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.637 [2024-05-15 19:46:40.559084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.637 qpair failed and we were unable to recover it. 00:31:14.637 [2024-05-15 19:46:40.559348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.637 [2024-05-15 19:46:40.559597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.637 [2024-05-15 19:46:40.559603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.637 qpair failed and we were unable to recover it. 
00:31:14.637 [2024-05-15 19:46:40.559865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:14.637 [2024-05-15 19:46:40.560262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:14.637 [2024-05-15 19:46:40.560268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420
00:31:14.637 qpair failed and we were unable to recover it.
[The same sequence — two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeats continuously from 19:46:40.560 through 19:46:40.667 (console time 00:31:14.637 to 00:31:14.643).]
00:31:14.643 [2024-05-15 19:46:40.667768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.643 [2024-05-15 19:46:40.667998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.643 [2024-05-15 19:46:40.668005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.643 qpair failed and we were unable to recover it. 00:31:14.643 [2024-05-15 19:46:40.668326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.643 [2024-05-15 19:46:40.668683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.643 [2024-05-15 19:46:40.668689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.643 qpair failed and we were unable to recover it. 00:31:14.643 [2024-05-15 19:46:40.669015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.643 [2024-05-15 19:46:40.669401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.643 [2024-05-15 19:46:40.669407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.643 qpair failed and we were unable to recover it. 00:31:14.643 [2024-05-15 19:46:40.669835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.643 [2024-05-15 19:46:40.670183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.643 [2024-05-15 19:46:40.670190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.643 qpair failed and we were unable to recover it. 00:31:14.643 [2024-05-15 19:46:40.670586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.643 [2024-05-15 19:46:40.670966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.643 [2024-05-15 19:46:40.670973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.643 qpair failed and we were unable to recover it. 00:31:14.643 [2024-05-15 19:46:40.671225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.643 [2024-05-15 19:46:40.671589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.643 [2024-05-15 19:46:40.671596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.643 qpair failed and we were unable to recover it. 00:31:14.643 [2024-05-15 19:46:40.671946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.643 [2024-05-15 19:46:40.672291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.643 [2024-05-15 19:46:40.672298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.643 qpair failed and we were unable to recover it. 
00:31:14.643 [2024-05-15 19:46:40.672671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.643 [2024-05-15 19:46:40.672877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.643 [2024-05-15 19:46:40.672884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.643 qpair failed and we were unable to recover it. 00:31:14.643 [2024-05-15 19:46:40.673147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.643 [2024-05-15 19:46:40.673542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.643 [2024-05-15 19:46:40.673549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.643 qpair failed and we were unable to recover it. 00:31:14.643 [2024-05-15 19:46:40.673894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.643 [2024-05-15 19:46:40.674274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.643 [2024-05-15 19:46:40.674280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.643 qpair failed and we were unable to recover it. 00:31:14.643 [2024-05-15 19:46:40.674661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.643 [2024-05-15 19:46:40.674863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.643 [2024-05-15 19:46:40.674871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.643 qpair failed and we were unable to recover it. 00:31:14.643 [2024-05-15 19:46:40.675207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.643 [2024-05-15 19:46:40.675591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.643 [2024-05-15 19:46:40.675598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.643 qpair failed and we were unable to recover it. 00:31:14.643 [2024-05-15 19:46:40.675985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.643 [2024-05-15 19:46:40.676343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.643 [2024-05-15 19:46:40.676350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.643 qpair failed and we were unable to recover it. 00:31:14.643 [2024-05-15 19:46:40.676545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.643 [2024-05-15 19:46:40.676901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.643 [2024-05-15 19:46:40.676908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.643 qpair failed and we were unable to recover it. 
00:31:14.643 [2024-05-15 19:46:40.677275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.643 [2024-05-15 19:46:40.677508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.643 [2024-05-15 19:46:40.677515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.643 qpair failed and we were unable to recover it. 00:31:14.643 [2024-05-15 19:46:40.677923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.643 [2024-05-15 19:46:40.678281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.643 [2024-05-15 19:46:40.678288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.643 qpair failed and we were unable to recover it. 00:31:14.643 [2024-05-15 19:46:40.678676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.643 [2024-05-15 19:46:40.679040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.643 [2024-05-15 19:46:40.679047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.643 qpair failed and we were unable to recover it. 00:31:14.643 [2024-05-15 19:46:40.679417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.643 [2024-05-15 19:46:40.679612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.643 [2024-05-15 19:46:40.679619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.643 qpair failed and we were unable to recover it. 00:31:14.643 [2024-05-15 19:46:40.679913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.643 [2024-05-15 19:46:40.680267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.680274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.644 qpair failed and we were unable to recover it. 00:31:14.644 [2024-05-15 19:46:40.680629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.680912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.680919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.644 qpair failed and we were unable to recover it. 00:31:14.644 [2024-05-15 19:46:40.681336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.681674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.681680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.644 qpair failed and we were unable to recover it. 
00:31:14.644 [2024-05-15 19:46:40.682028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.682392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.682399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.644 qpair failed and we were unable to recover it. 00:31:14.644 [2024-05-15 19:46:40.682642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.683032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.683038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.644 qpair failed and we were unable to recover it. 00:31:14.644 [2024-05-15 19:46:40.683399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.683797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.683803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.644 qpair failed and we were unable to recover it. 00:31:14.644 [2024-05-15 19:46:40.684143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.684503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.684510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.644 qpair failed and we were unable to recover it. 00:31:14.644 [2024-05-15 19:46:40.684770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.685081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.685087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.644 qpair failed and we were unable to recover it. 00:31:14.644 [2024-05-15 19:46:40.685453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.685833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.685840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.644 qpair failed and we were unable to recover it. 00:31:14.644 [2024-05-15 19:46:40.686183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.686545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.686551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.644 qpair failed and we were unable to recover it. 
00:31:14.644 [2024-05-15 19:46:40.686900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.687252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.687259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.644 qpair failed and we were unable to recover it. 00:31:14.644 [2024-05-15 19:46:40.687536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.687916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.687923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.644 qpair failed and we were unable to recover it. 00:31:14.644 [2024-05-15 19:46:40.688174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.688415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.688422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.644 qpair failed and we were unable to recover it. 00:31:14.644 [2024-05-15 19:46:40.688708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.689078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.689084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.644 qpair failed and we were unable to recover it. 00:31:14.644 [2024-05-15 19:46:40.689437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.689837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.689843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.644 qpair failed and we were unable to recover it. 00:31:14.644 [2024-05-15 19:46:40.690177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.690555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.690561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.644 qpair failed and we were unable to recover it. 00:31:14.644 [2024-05-15 19:46:40.690814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.691213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.691220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.644 qpair failed and we were unable to recover it. 
00:31:14.644 [2024-05-15 19:46:40.691590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.691779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.691786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.644 qpair failed and we were unable to recover it. 00:31:14.644 [2024-05-15 19:46:40.692107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.692525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.692531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.644 qpair failed and we were unable to recover it. 00:31:14.644 [2024-05-15 19:46:40.692786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.693171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.693177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.644 qpair failed and we were unable to recover it. 00:31:14.644 [2024-05-15 19:46:40.693449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.693823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.693830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.644 qpair failed and we were unable to recover it. 00:31:14.644 [2024-05-15 19:46:40.694179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.694419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.694426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.644 qpair failed and we were unable to recover it. 00:31:14.644 [2024-05-15 19:46:40.694807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.695188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.695195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.644 qpair failed and we were unable to recover it. 00:31:14.644 [2024-05-15 19:46:40.695242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.695636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.695643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.644 qpair failed and we were unable to recover it. 
00:31:14.644 [2024-05-15 19:46:40.696018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.696305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.696311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.644 qpair failed and we were unable to recover it. 00:31:14.644 [2024-05-15 19:46:40.696690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.697039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.697046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.644 qpair failed and we were unable to recover it. 00:31:14.644 [2024-05-15 19:46:40.697488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.697858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.697864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.644 qpair failed and we were unable to recover it. 00:31:14.644 [2024-05-15 19:46:40.698115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.698474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.698480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.644 qpair failed and we were unable to recover it. 00:31:14.644 [2024-05-15 19:46:40.698818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.699148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.644 [2024-05-15 19:46:40.699154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.644 qpair failed and we were unable to recover it. 00:31:14.645 [2024-05-15 19:46:40.699551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.645 [2024-05-15 19:46:40.699900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.645 [2024-05-15 19:46:40.699907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.645 qpair failed and we were unable to recover it. 00:31:14.645 [2024-05-15 19:46:40.700272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.645 [2024-05-15 19:46:40.700627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.645 [2024-05-15 19:46:40.700633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.645 qpair failed and we were unable to recover it. 
00:31:14.645 [2024-05-15 19:46:40.700960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.645 [2024-05-15 19:46:40.701342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.645 [2024-05-15 19:46:40.701349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.645 qpair failed and we were unable to recover it. 00:31:14.645 [2024-05-15 19:46:40.701742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.645 [2024-05-15 19:46:40.702008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.645 [2024-05-15 19:46:40.702016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.645 qpair failed and we were unable to recover it. 00:31:14.645 [2024-05-15 19:46:40.702409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.645 [2024-05-15 19:46:40.702755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.645 [2024-05-15 19:46:40.702761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.645 qpair failed and we were unable to recover it. 00:31:14.645 [2024-05-15 19:46:40.703117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.645 [2024-05-15 19:46:40.703486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.645 [2024-05-15 19:46:40.703492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.645 qpair failed and we were unable to recover it. 00:31:14.645 [2024-05-15 19:46:40.703917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.645 [2024-05-15 19:46:40.704297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.645 [2024-05-15 19:46:40.704303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.645 qpair failed and we were unable to recover it. 00:31:14.645 [2024-05-15 19:46:40.704665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.645 [2024-05-15 19:46:40.705027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.645 [2024-05-15 19:46:40.705033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.645 qpair failed and we were unable to recover it. 00:31:14.645 [2024-05-15 19:46:40.705368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.645 [2024-05-15 19:46:40.705614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.645 [2024-05-15 19:46:40.705620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.645 qpair failed and we were unable to recover it. 
00:31:14.645 [2024-05-15 19:46:40.705963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.645 [2024-05-15 19:46:40.706366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.645 [2024-05-15 19:46:40.706373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.645 qpair failed and we were unable to recover it. 00:31:14.645 [2024-05-15 19:46:40.706562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.645 [2024-05-15 19:46:40.706883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.645 [2024-05-15 19:46:40.706889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.645 qpair failed and we were unable to recover it. 00:31:14.645 [2024-05-15 19:46:40.707165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.645 [2024-05-15 19:46:40.707523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.645 [2024-05-15 19:46:40.707529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.645 qpair failed and we were unable to recover it. 00:31:14.645 [2024-05-15 19:46:40.707912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.645 [2024-05-15 19:46:40.708310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.645 [2024-05-15 19:46:40.708323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.645 qpair failed and we were unable to recover it. 00:31:14.645 [2024-05-15 19:46:40.708703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.645 [2024-05-15 19:46:40.709054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.645 [2024-05-15 19:46:40.709061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.645 qpair failed and we were unable to recover it. 00:31:14.645 [2024-05-15 19:46:40.709471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.645 [2024-05-15 19:46:40.709785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.645 [2024-05-15 19:46:40.709791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.645 qpair failed and we were unable to recover it. 00:31:14.645 [2024-05-15 19:46:40.710157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.645 [2024-05-15 19:46:40.710508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.645 [2024-05-15 19:46:40.710514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.645 qpair failed and we were unable to recover it. 
00:31:14.645 [2024-05-15 19:46:40.710903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.645 [2024-05-15 19:46:40.711289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.645 [2024-05-15 19:46:40.711295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.645 qpair failed and we were unable to recover it. 00:31:14.645 [2024-05-15 19:46:40.711654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.645 [2024-05-15 19:46:40.712024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.645 [2024-05-15 19:46:40.712030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.645 qpair failed and we were unable to recover it. 00:31:14.645 [2024-05-15 19:46:40.712407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.645 [2024-05-15 19:46:40.712610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.645 [2024-05-15 19:46:40.712617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.645 qpair failed and we were unable to recover it. 00:31:14.645 [2024-05-15 19:46:40.713022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.645 [2024-05-15 19:46:40.713401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.645 [2024-05-15 19:46:40.713408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.645 qpair failed and we were unable to recover it. 00:31:14.645 [2024-05-15 19:46:40.713776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.645 [2024-05-15 19:46:40.714158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.645 [2024-05-15 19:46:40.714165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.645 qpair failed and we were unable to recover it. 00:31:14.645 [2024-05-15 19:46:40.714552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.645 [2024-05-15 19:46:40.714955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.645 [2024-05-15 19:46:40.714962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.645 qpair failed and we were unable to recover it. 00:31:14.645 [2024-05-15 19:46:40.715335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.645 [2024-05-15 19:46:40.715673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.645 [2024-05-15 19:46:40.715679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.645 qpair failed and we were unable to recover it. 
00:31:14.646 [2024-05-15 19:46:40.716026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.716292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.716300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.646 qpair failed and we were unable to recover it. 00:31:14.646 [2024-05-15 19:46:40.716665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.717019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.717025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.646 qpair failed and we were unable to recover it. 00:31:14.646 [2024-05-15 19:46:40.717370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.717772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.717778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.646 qpair failed and we were unable to recover it. 00:31:14.646 [2024-05-15 19:46:40.718145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.718495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.718501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.646 qpair failed and we were unable to recover it. 00:31:14.646 [2024-05-15 19:46:40.718894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.719292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.719299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.646 qpair failed and we were unable to recover it. 00:31:14.646 [2024-05-15 19:46:40.719664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.720021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.720027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.646 qpair failed and we were unable to recover it. 00:31:14.646 [2024-05-15 19:46:40.720285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.720514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.720521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.646 qpair failed and we were unable to recover it. 
00:31:14.646 [2024-05-15 19:46:40.720874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.721230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.721237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.646 qpair failed and we were unable to recover it. 00:31:14.646 [2024-05-15 19:46:40.721605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.721923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.721930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.646 qpair failed and we were unable to recover it. 00:31:14.646 [2024-05-15 19:46:40.722308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.722671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.722677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.646 qpair failed and we were unable to recover it. 00:31:14.646 [2024-05-15 19:46:40.723021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.723378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.723386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.646 qpair failed and we were unable to recover it. 00:31:14.646 [2024-05-15 19:46:40.723754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.724025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.724031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.646 qpair failed and we were unable to recover it. 00:31:14.646 [2024-05-15 19:46:40.724380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.724757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.724763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.646 qpair failed and we were unable to recover it. 00:31:14.646 [2024-05-15 19:46:40.724936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.725299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.725306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.646 qpair failed and we were unable to recover it. 
00:31:14.646 [2024-05-15 19:46:40.725665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.725970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.725978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.646 qpair failed and we were unable to recover it. 00:31:14.646 [2024-05-15 19:46:40.726364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.726690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.726696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.646 qpair failed and we were unable to recover it. 00:31:14.646 [2024-05-15 19:46:40.727061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.727445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.727452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.646 qpair failed and we were unable to recover it. 00:31:14.646 [2024-05-15 19:46:40.727806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.728185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.728191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.646 qpair failed and we were unable to recover it. 00:31:14.646 [2024-05-15 19:46:40.728558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.728955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.728962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.646 qpair failed and we were unable to recover it. 00:31:14.646 [2024-05-15 19:46:40.729324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.729776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.729783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.646 qpair failed and we were unable to recover it. 00:31:14.646 [2024-05-15 19:46:40.730091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.730480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.730488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.646 qpair failed and we were unable to recover it. 
00:31:14.646 [2024-05-15 19:46:40.730959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.731319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.731326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.646 qpair failed and we were unable to recover it. 00:31:14.646 [2024-05-15 19:46:40.731657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.732046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.732052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.646 qpair failed and we were unable to recover it. 00:31:14.646 [2024-05-15 19:46:40.732268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.732581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.732587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.646 qpair failed and we were unable to recover it. 00:31:14.646 [2024-05-15 19:46:40.732979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.733304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.733311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.646 qpair failed and we were unable to recover it. 00:31:14.646 [2024-05-15 19:46:40.733691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.734026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.734033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.646 qpair failed and we were unable to recover it. 00:31:14.646 [2024-05-15 19:46:40.734411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.734763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.734770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.646 qpair failed and we were unable to recover it. 00:31:14.646 [2024-05-15 19:46:40.735117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.735505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.735511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.646 qpair failed and we were unable to recover it. 
00:31:14.646 [2024-05-15 19:46:40.735760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.646 [2024-05-15 19:46:40.736112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.736118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.647 qpair failed and we were unable to recover it. 00:31:14.647 [2024-05-15 19:46:40.736372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.736610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.736618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.647 qpair failed and we were unable to recover it. 00:31:14.647 [2024-05-15 19:46:40.736853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.737212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.737220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.647 qpair failed and we were unable to recover it. 00:31:14.647 [2024-05-15 19:46:40.737614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.737970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.737977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.647 qpair failed and we were unable to recover it. 00:31:14.647 [2024-05-15 19:46:40.738344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.738704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.738710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.647 qpair failed and we were unable to recover it. 00:31:14.647 [2024-05-15 19:46:40.739048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.739283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.739290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.647 qpair failed and we were unable to recover it. 00:31:14.647 [2024-05-15 19:46:40.739483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.739806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.739812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.647 qpair failed and we were unable to recover it. 
00:31:14.647 [2024-05-15 19:46:40.740196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.740513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.740520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.647 qpair failed and we were unable to recover it. 00:31:14.647 [2024-05-15 19:46:40.740888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.741271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.741277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.647 qpair failed and we were unable to recover it. 00:31:14.647 [2024-05-15 19:46:40.741546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.741935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.741942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.647 qpair failed and we were unable to recover it. 00:31:14.647 [2024-05-15 19:46:40.742309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.742694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.742701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.647 qpair failed and we were unable to recover it. 00:31:14.647 [2024-05-15 19:46:40.742962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.743196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.743203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.647 qpair failed and we were unable to recover it. 00:31:14.647 [2024-05-15 19:46:40.743596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.743948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.743954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.647 qpair failed and we were unable to recover it. 00:31:14.647 [2024-05-15 19:46:40.744316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.744710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.744717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.647 qpair failed and we were unable to recover it. 
00:31:14.647 [2024-05-15 19:46:40.745035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.745221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.745228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.647 qpair failed and we were unable to recover it. 00:31:14.647 [2024-05-15 19:46:40.745591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.745967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.745974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.647 qpair failed and we were unable to recover it. 00:31:14.647 [2024-05-15 19:46:40.746318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.746672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.746679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.647 qpair failed and we were unable to recover it. 00:31:14.647 [2024-05-15 19:46:40.747069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.747423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.747430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.647 qpair failed and we were unable to recover it. 00:31:14.647 [2024-05-15 19:46:40.747805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.748166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.748172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.647 qpair failed and we were unable to recover it. 00:31:14.647 [2024-05-15 19:46:40.748524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.748714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.748721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.647 qpair failed and we were unable to recover it. 00:31:14.647 [2024-05-15 19:46:40.749095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.749480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.749487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.647 qpair failed and we were unable to recover it. 
00:31:14.647 [2024-05-15 19:46:40.749839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.750181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.750187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.647 qpair failed and we were unable to recover it. 00:31:14.647 [2024-05-15 19:46:40.750578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.750970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.750976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.647 qpair failed and we were unable to recover it. 00:31:14.647 [2024-05-15 19:46:40.751323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.751676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.751682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.647 qpair failed and we were unable to recover it. 00:31:14.647 [2024-05-15 19:46:40.752083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.752303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.752311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.647 qpair failed and we were unable to recover it. 00:31:14.647 [2024-05-15 19:46:40.752673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.752954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.752961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.647 qpair failed and we were unable to recover it. 00:31:14.647 [2024-05-15 19:46:40.753335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.753741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.753747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.647 qpair failed and we were unable to recover it. 00:31:14.647 [2024-05-15 19:46:40.753972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.754389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.754395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.647 qpair failed and we were unable to recover it. 
00:31:14.647 [2024-05-15 19:46:40.754744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.647 [2024-05-15 19:46:40.754971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.754978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.648 qpair failed and we were unable to recover it. 00:31:14.648 [2024-05-15 19:46:40.755351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.755764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.755770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.648 qpair failed and we were unable to recover it. 00:31:14.648 [2024-05-15 19:46:40.756158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.756361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.756369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.648 qpair failed and we were unable to recover it. 00:31:14.648 [2024-05-15 19:46:40.756652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.757005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.757011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.648 qpair failed and we were unable to recover it. 00:31:14.648 [2024-05-15 19:46:40.757367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.757639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.757646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.648 qpair failed and we were unable to recover it. 00:31:14.648 [2024-05-15 19:46:40.758016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.758274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.758281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.648 qpair failed and we were unable to recover it. 00:31:14.648 [2024-05-15 19:46:40.758729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.759040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.759046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.648 qpair failed and we were unable to recover it. 
00:31:14.648 [2024-05-15 19:46:40.759434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.759661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.759668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.648 qpair failed and we were unable to recover it. 00:31:14.648 [2024-05-15 19:46:40.760014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.760394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.760401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.648 qpair failed and we were unable to recover it. 00:31:14.648 [2024-05-15 19:46:40.760821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.761094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.761101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.648 qpair failed and we were unable to recover it. 00:31:14.648 [2024-05-15 19:46:40.761475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.761814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.761821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.648 qpair failed and we were unable to recover it. 00:31:14.648 [2024-05-15 19:46:40.762163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.762519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.762525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.648 qpair failed and we were unable to recover it. 00:31:14.648 [2024-05-15 19:46:40.762914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.763262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.763269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.648 qpair failed and we were unable to recover it. 00:31:14.648 [2024-05-15 19:46:40.763595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.763938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.763944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.648 qpair failed and we were unable to recover it. 
00:31:14.648 [2024-05-15 19:46:40.764133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.764486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.764493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.648 qpair failed and we were unable to recover it. 00:31:14.648 [2024-05-15 19:46:40.764872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.765247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.765253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.648 qpair failed and we were unable to recover it. 00:31:14.648 [2024-05-15 19:46:40.765636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.765904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.765911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.648 qpair failed and we were unable to recover it. 00:31:14.648 [2024-05-15 19:46:40.766297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.766683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.766689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.648 qpair failed and we were unable to recover it. 00:31:14.648 [2024-05-15 19:46:40.767078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.767381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.767388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.648 qpair failed and we were unable to recover it. 00:31:14.648 [2024-05-15 19:46:40.767741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.768128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.768134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.648 qpair failed and we were unable to recover it. 00:31:14.648 [2024-05-15 19:46:40.768510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.768902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.768908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.648 qpair failed and we were unable to recover it. 
00:31:14.648 [2024-05-15 19:46:40.769266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.769644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.769650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.648 qpair failed and we were unable to recover it. 00:31:14.648 [2024-05-15 19:46:40.769986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.770299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.770306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.648 qpair failed and we were unable to recover it. 00:31:14.648 [2024-05-15 19:46:40.770678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.771029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.771036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.648 qpair failed and we were unable to recover it. 00:31:14.648 [2024-05-15 19:46:40.771522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.771909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.771918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.648 qpair failed and we were unable to recover it. 00:31:14.648 [2024-05-15 19:46:40.772278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.772630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.772638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.648 qpair failed and we were unable to recover it. 00:31:14.648 [2024-05-15 19:46:40.773053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.773584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.773611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.648 qpair failed and we were unable to recover it. 00:31:14.648 [2024-05-15 19:46:40.774037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.774437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.774444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.648 qpair failed and we were unable to recover it. 
00:31:14.648 [2024-05-15 19:46:40.774717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.775104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.648 [2024-05-15 19:46:40.775110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.648 qpair failed and we were unable to recover it. 00:31:14.649 [2024-05-15 19:46:40.775456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.775839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.775845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.649 qpair failed and we were unable to recover it. 00:31:14.649 [2024-05-15 19:46:40.776253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.776630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.776638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.649 qpair failed and we were unable to recover it. 00:31:14.649 [2024-05-15 19:46:40.776997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.777357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.777363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.649 qpair failed and we were unable to recover it. 00:31:14.649 [2024-05-15 19:46:40.777712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.778066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.778073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.649 qpair failed and we were unable to recover it. 00:31:14.649 [2024-05-15 19:46:40.778465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.778844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.778850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.649 qpair failed and we were unable to recover it. 00:31:14.649 [2024-05-15 19:46:40.779235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.779596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.779603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.649 qpair failed and we were unable to recover it. 
00:31:14.649 [2024-05-15 19:46:40.779826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.780188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.780195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.649 qpair failed and we were unable to recover it. 00:31:14.649 [2024-05-15 19:46:40.780497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.780864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.780870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.649 qpair failed and we were unable to recover it. 00:31:14.649 [2024-05-15 19:46:40.781213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.781585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.781591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.649 qpair failed and we were unable to recover it. 00:31:14.649 [2024-05-15 19:46:40.781968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.782333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.782340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.649 qpair failed and we were unable to recover it. 00:31:14.649 [2024-05-15 19:46:40.782829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.783191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.783197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.649 qpair failed and we were unable to recover it. 00:31:14.649 [2024-05-15 19:46:40.783552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.783782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.783789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.649 qpair failed and we were unable to recover it. 00:31:14.649 [2024-05-15 19:46:40.783966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.784310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.784320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.649 qpair failed and we were unable to recover it. 
00:31:14.649 [2024-05-15 19:46:40.784585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.784968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.784975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.649 qpair failed and we were unable to recover it. 00:31:14.649 [2024-05-15 19:46:40.785352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.785688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.785695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.649 qpair failed and we were unable to recover it. 00:31:14.649 [2024-05-15 19:46:40.786037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.786416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.786422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.649 qpair failed and we were unable to recover it. 00:31:14.649 [2024-05-15 19:46:40.786610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.786934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.786941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.649 qpair failed and we were unable to recover it. 00:31:14.649 [2024-05-15 19:46:40.787312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.787668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.787675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.649 qpair failed and we were unable to recover it. 00:31:14.649 [2024-05-15 19:46:40.788057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.788391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.788398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.649 qpair failed and we were unable to recover it. 00:31:14.649 [2024-05-15 19:46:40.788756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.789141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.789147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.649 qpair failed and we were unable to recover it. 
00:31:14.649 [2024-05-15 19:46:40.789543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.789927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.789934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.649 qpair failed and we were unable to recover it. 00:31:14.649 [2024-05-15 19:46:40.790321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.790505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.790512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.649 qpair failed and we were unable to recover it. 00:31:14.649 [2024-05-15 19:46:40.790925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.791123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.791129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.649 qpair failed and we were unable to recover it. 00:31:14.649 [2024-05-15 19:46:40.791505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.791870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.791876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.649 qpair failed and we were unable to recover it. 00:31:14.649 [2024-05-15 19:46:40.792263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.792545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.649 [2024-05-15 19:46:40.792552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.649 qpair failed and we were unable to recover it. 00:31:14.650 [2024-05-15 19:46:40.792904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.793073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.793080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.650 qpair failed and we were unable to recover it. 00:31:14.650 [2024-05-15 19:46:40.793440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.793795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.793802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.650 qpair failed and we were unable to recover it. 
00:31:14.650 [2024-05-15 19:46:40.794108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.794473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.794480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.650 qpair failed and we were unable to recover it. 00:31:14.650 [2024-05-15 19:46:40.794868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.795166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.795181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.650 qpair failed and we were unable to recover it. 00:31:14.650 [2024-05-15 19:46:40.795541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.795898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.795904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.650 qpair failed and we were unable to recover it. 00:31:14.650 [2024-05-15 19:46:40.796257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.796603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.796610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.650 qpair failed and we were unable to recover it. 00:31:14.650 [2024-05-15 19:46:40.796974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.797320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.797327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.650 qpair failed and we were unable to recover it. 00:31:14.650 [2024-05-15 19:46:40.797686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.797932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.797939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.650 qpair failed and we were unable to recover it. 00:31:14.650 [2024-05-15 19:46:40.798276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.798493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.798500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.650 qpair failed and we were unable to recover it. 
00:31:14.650 [2024-05-15 19:46:40.798833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.799102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.799109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.650 qpair failed and we were unable to recover it. 00:31:14.650 [2024-05-15 19:46:40.799487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.799863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.799869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.650 qpair failed and we were unable to recover it. 00:31:14.650 [2024-05-15 19:46:40.800064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.800497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.800504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.650 qpair failed and we were unable to recover it. 00:31:14.650 [2024-05-15 19:46:40.800678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.801025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.801031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.650 qpair failed and we were unable to recover it. 00:31:14.650 [2024-05-15 19:46:40.801416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.801782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.801788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.650 qpair failed and we were unable to recover it. 00:31:14.650 [2024-05-15 19:46:40.802136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.802478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.802486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.650 qpair failed and we were unable to recover it. 00:31:14.650 [2024-05-15 19:46:40.802716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.803066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.803073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.650 qpair failed and we were unable to recover it. 
00:31:14.650 [2024-05-15 19:46:40.803436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.803796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.803802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.650 qpair failed and we were unable to recover it. 00:31:14.650 [2024-05-15 19:46:40.804257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.804452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.804460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.650 qpair failed and we were unable to recover it. 00:31:14.650 [2024-05-15 19:46:40.804789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.805183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.805189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.650 qpair failed and we were unable to recover it. 00:31:14.650 [2024-05-15 19:46:40.805416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.805787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.805794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.650 qpair failed and we were unable to recover it. 00:31:14.650 [2024-05-15 19:46:40.806124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.806405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.806412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.650 qpair failed and we were unable to recover it. 00:31:14.650 [2024-05-15 19:46:40.806724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.807093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.807100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.650 qpair failed and we were unable to recover it. 00:31:14.650 [2024-05-15 19:46:40.807451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.807826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.807833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.650 qpair failed and we were unable to recover it. 
00:31:14.650 [2024-05-15 19:46:40.808045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.808409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.808416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.650 qpair failed and we were unable to recover it. 00:31:14.650 [2024-05-15 19:46:40.808685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.809063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.809069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.650 qpair failed and we were unable to recover it. 00:31:14.650 [2024-05-15 19:46:40.809394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.809713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.809719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.650 qpair failed and we were unable to recover it. 00:31:14.650 [2024-05-15 19:46:40.810086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.810338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.810344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.650 qpair failed and we were unable to recover it. 00:31:14.650 [2024-05-15 19:46:40.810710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.811138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.650 [2024-05-15 19:46:40.811145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.650 qpair failed and we were unable to recover it. 00:31:14.650 [2024-05-15 19:46:40.811519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.811912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.811920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.921 qpair failed and we were unable to recover it. 00:31:14.921 [2024-05-15 19:46:40.812336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.812703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.812709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.921 qpair failed and we were unable to recover it. 
00:31:14.921 [2024-05-15 19:46:40.812976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.813291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.813297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.921 qpair failed and we were unable to recover it. 00:31:14.921 [2024-05-15 19:46:40.813662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.813927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.813933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.921 qpair failed and we were unable to recover it. 00:31:14.921 [2024-05-15 19:46:40.814297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.814649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.814656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.921 qpair failed and we were unable to recover it. 00:31:14.921 [2024-05-15 19:46:40.815047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.815374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.815381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.921 qpair failed and we were unable to recover it. 00:31:14.921 [2024-05-15 19:46:40.815772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.816168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.816174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.921 qpair failed and we were unable to recover it. 00:31:14.921 [2024-05-15 19:46:40.816597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.816960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.816966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.921 qpair failed and we were unable to recover it. 00:31:14.921 [2024-05-15 19:46:40.817324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.817719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.817726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.921 qpair failed and we were unable to recover it. 
00:31:14.921 [2024-05-15 19:46:40.818090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.818590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.818617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.921 qpair failed and we were unable to recover it. 00:31:14.921 [2024-05-15 19:46:40.818988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.819158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.819167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.921 qpair failed and we were unable to recover it. 00:31:14.921 [2024-05-15 19:46:40.819526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.819937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.819944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.921 qpair failed and we were unable to recover it. 00:31:14.921 [2024-05-15 19:46:40.820319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.820691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.820697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.921 qpair failed and we were unable to recover it. 00:31:14.921 [2024-05-15 19:46:40.821068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.821535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.821566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.921 qpair failed and we were unable to recover it. 00:31:14.921 [2024-05-15 19:46:40.821933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.822291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.822298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.921 qpair failed and we were unable to recover it. 00:31:14.921 [2024-05-15 19:46:40.822648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.823032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.823038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.921 qpair failed and we were unable to recover it. 
00:31:14.921 [2024-05-15 19:46:40.823228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.823598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.823606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.921 qpair failed and we were unable to recover it. 00:31:14.921 [2024-05-15 19:46:40.823950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.824304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.824311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.921 qpair failed and we were unable to recover it. 00:31:14.921 [2024-05-15 19:46:40.824666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.825060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.825066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.921 qpair failed and we were unable to recover it. 00:31:14.921 [2024-05-15 19:46:40.825415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.825633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.825640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.921 qpair failed and we were unable to recover it. 00:31:14.921 [2024-05-15 19:46:40.825912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.826254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.826260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.921 qpair failed and we were unable to recover it. 00:31:14.921 [2024-05-15 19:46:40.826619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.826992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.826999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.921 qpair failed and we were unable to recover it. 00:31:14.921 [2024-05-15 19:46:40.827370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.827754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.827760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.921 qpair failed and we were unable to recover it. 
00:31:14.921 [2024-05-15 19:46:40.827960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.828214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.921 [2024-05-15 19:46:40.828223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.922 qpair failed and we were unable to recover it. 00:31:14.922 [2024-05-15 19:46:40.828593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.828947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.828953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.922 qpair failed and we were unable to recover it. 00:31:14.922 [2024-05-15 19:46:40.829381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.829657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.829663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.922 qpair failed and we were unable to recover it. 00:31:14.922 [2024-05-15 19:46:40.830047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.830449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.830456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.922 qpair failed and we were unable to recover it. 00:31:14.922 [2024-05-15 19:46:40.830842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.831042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.831049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.922 qpair failed and we were unable to recover it. 00:31:14.922 [2024-05-15 19:46:40.831386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.831736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.831742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.922 qpair failed and we were unable to recover it. 00:31:14.922 [2024-05-15 19:46:40.832088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.832371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.832377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.922 qpair failed and we were unable to recover it. 
00:31:14.922 [2024-05-15 19:46:40.832757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.833115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.833121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.922 qpair failed and we were unable to recover it. 00:31:14.922 [2024-05-15 19:46:40.833465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.833908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.833914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.922 qpair failed and we were unable to recover it. 00:31:14.922 [2024-05-15 19:46:40.834256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.834588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.834595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.922 qpair failed and we were unable to recover it. 00:31:14.922 [2024-05-15 19:46:40.834949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.835344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.835353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.922 qpair failed and we were unable to recover it. 00:31:14.922 [2024-05-15 19:46:40.835727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.835887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.835894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.922 qpair failed and we were unable to recover it. 00:31:14.922 [2024-05-15 19:46:40.836317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.836701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.836708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.922 qpair failed and we were unable to recover it. 00:31:14.922 [2024-05-15 19:46:40.837052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.837415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.837422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.922 qpair failed and we were unable to recover it. 
00:31:14.922 [2024-05-15 19:46:40.837792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.838160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.838166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.922 qpair failed and we were unable to recover it. 00:31:14.922 [2024-05-15 19:46:40.838552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.838941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.838947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.922 qpair failed and we were unable to recover it. 00:31:14.922 [2024-05-15 19:46:40.839300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.839657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.839664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.922 qpair failed and we were unable to recover it. 00:31:14.922 [2024-05-15 19:46:40.840031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.840529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.840556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.922 qpair failed and we were unable to recover it. 00:31:14.922 [2024-05-15 19:46:40.840920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.841278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.841285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.922 qpair failed and we were unable to recover it. 00:31:14.922 [2024-05-15 19:46:40.841538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.841929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.841936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.922 qpair failed and we were unable to recover it. 00:31:14.922 [2024-05-15 19:46:40.842140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.842535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.842545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.922 qpair failed and we were unable to recover it. 
00:31:14.922 [2024-05-15 19:46:40.842937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.843274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.843280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.922 qpair failed and we were unable to recover it. 00:31:14.922 [2024-05-15 19:46:40.843649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.843998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.844004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.922 qpair failed and we were unable to recover it. 00:31:14.922 [2024-05-15 19:46:40.844231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.844568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.844575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.922 qpair failed and we were unable to recover it. 00:31:14.922 [2024-05-15 19:46:40.844933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.845291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.845297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.922 qpair failed and we were unable to recover it. 00:31:14.922 [2024-05-15 19:46:40.845568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.845849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.845855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.922 qpair failed and we were unable to recover it. 00:31:14.922 [2024-05-15 19:46:40.846244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.846634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.846640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.922 qpair failed and we were unable to recover it. 00:31:14.922 [2024-05-15 19:46:40.846883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.847234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.847241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.922 qpair failed and we were unable to recover it. 
00:31:14.922 [2024-05-15 19:46:40.847596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.847990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.847997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.922 qpair failed and we were unable to recover it. 00:31:14.922 [2024-05-15 19:46:40.848365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.848710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.848717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.922 qpair failed and we were unable to recover it. 00:31:14.922 [2024-05-15 19:46:40.849087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.849251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.849258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.922 qpair failed and we were unable to recover it. 00:31:14.922 [2024-05-15 19:46:40.849623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.850020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.850027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.922 qpair failed and we were unable to recover it. 00:31:14.922 [2024-05-15 19:46:40.850415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.850789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.922 [2024-05-15 19:46:40.850795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.922 qpair failed and we were unable to recover it. 00:31:14.923 [2024-05-15 19:46:40.851104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.851267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.851273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.923 qpair failed and we were unable to recover it. 00:31:14.923 [2024-05-15 19:46:40.851572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.851960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.851966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.923 qpair failed and we were unable to recover it. 
00:31:14.923 [2024-05-15 19:46:40.852319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.852687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.852693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.923 qpair failed and we were unable to recover it. 00:31:14.923 [2024-05-15 19:46:40.853078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.853476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.853484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.923 qpair failed and we were unable to recover it. 00:31:14.923 [2024-05-15 19:46:40.853851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.854184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.854190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.923 qpair failed and we were unable to recover it. 00:31:14.923 [2024-05-15 19:46:40.854577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.854821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.854827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.923 qpair failed and we were unable to recover it. 00:31:14.923 [2024-05-15 19:46:40.855192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.855664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.855692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.923 qpair failed and we were unable to recover it. 00:31:14.923 [2024-05-15 19:46:40.856054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.856444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.856452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.923 qpair failed and we were unable to recover it. 00:31:14.923 [2024-05-15 19:46:40.856822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.857128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.857134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.923 qpair failed and we were unable to recover it. 
00:31:14.923 [2024-05-15 19:46:40.857491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.857877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.857884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.923 qpair failed and we were unable to recover it. 00:31:14.923 [2024-05-15 19:46:40.858249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.858579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.858585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.923 qpair failed and we were unable to recover it. 00:31:14.923 [2024-05-15 19:46:40.858933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.859309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.859325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.923 qpair failed and we were unable to recover it. 00:31:14.923 [2024-05-15 19:46:40.859661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.859967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.859973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.923 qpair failed and we were unable to recover it. 00:31:14.923 [2024-05-15 19:46:40.860363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.860701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.860708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.923 qpair failed and we were unable to recover it. 00:31:14.923 [2024-05-15 19:46:40.861077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.861511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.861518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.923 qpair failed and we were unable to recover it. 00:31:14.923 [2024-05-15 19:46:40.861868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.862204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.862211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.923 qpair failed and we were unable to recover it. 
00:31:14.923 [2024-05-15 19:46:40.862596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.862839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.862847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.923 qpair failed and we were unable to recover it. 00:31:14.923 [2024-05-15 19:46:40.863115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.863544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.863551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.923 qpair failed and we were unable to recover it. 00:31:14.923 [2024-05-15 19:46:40.863918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.864273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.864279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.923 qpair failed and we were unable to recover it. 00:31:14.923 [2024-05-15 19:46:40.864722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.865071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.865077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.923 qpair failed and we were unable to recover it. 00:31:14.923 [2024-05-15 19:46:40.865408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.865705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.865711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.923 qpair failed and we were unable to recover it. 00:31:14.923 [2024-05-15 19:46:40.866097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.866442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.866450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.923 qpair failed and we were unable to recover it. 00:31:14.923 [2024-05-15 19:46:40.866779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.867025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.867032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.923 qpair failed and we were unable to recover it. 
00:31:14.923 [2024-05-15 19:46:40.867429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.867735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.867741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.923 qpair failed and we were unable to recover it. 00:31:14.923 [2024-05-15 19:46:40.868120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.868481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.868488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.923 qpair failed and we were unable to recover it. 00:31:14.923 [2024-05-15 19:46:40.868841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.869232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.869238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.923 qpair failed and we were unable to recover it. 00:31:14.923 [2024-05-15 19:46:40.869574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.869957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.869963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.923 qpair failed and we were unable to recover it. 00:31:14.923 [2024-05-15 19:46:40.870304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.870696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.870703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.923 qpair failed and we were unable to recover it. 00:31:14.923 [2024-05-15 19:46:40.871056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.871415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.871422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.923 qpair failed and we were unable to recover it. 00:31:14.923 [2024-05-15 19:46:40.871773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.872134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.872140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.923 qpair failed and we were unable to recover it. 
00:31:14.923 [2024-05-15 19:46:40.872498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.872873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.872881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.923 qpair failed and we were unable to recover it. 00:31:14.923 [2024-05-15 19:46:40.873269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.873554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.873561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.923 qpair failed and we were unable to recover it. 00:31:14.923 [2024-05-15 19:46:40.873924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.874119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.874128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.923 qpair failed and we were unable to recover it. 00:31:14.923 [2024-05-15 19:46:40.874501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.874861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.874868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.923 qpair failed and we were unable to recover it. 00:31:14.923 [2024-05-15 19:46:40.875229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.875475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.875482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.923 qpair failed and we were unable to recover it. 00:31:14.923 [2024-05-15 19:46:40.875849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.876239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.923 [2024-05-15 19:46:40.876246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.923 qpair failed and we were unable to recover it. 00:31:14.924 [2024-05-15 19:46:40.876438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.876697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.876704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.924 qpair failed and we were unable to recover it. 
00:31:14.924 [2024-05-15 19:46:40.877096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.877395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.877403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.924 qpair failed and we were unable to recover it. 00:31:14.924 [2024-05-15 19:46:40.877782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.878135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.878141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.924 qpair failed and we were unable to recover it. 00:31:14.924 [2024-05-15 19:46:40.878487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.878848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.878856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.924 qpair failed and we were unable to recover it. 00:31:14.924 [2024-05-15 19:46:40.879049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.879431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.879438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.924 qpair failed and we were unable to recover it. 00:31:14.924 [2024-05-15 19:46:40.879787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.880184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.880191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.924 qpair failed and we were unable to recover it. 00:31:14.924 [2024-05-15 19:46:40.880591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.881003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.881010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.924 qpair failed and we were unable to recover it. 00:31:14.924 [2024-05-15 19:46:40.881367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.881744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.881750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.924 qpair failed and we were unable to recover it. 
00:31:14.924 [2024-05-15 19:46:40.881974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.882375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.882382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.924 qpair failed and we were unable to recover it. 00:31:14.924 [2024-05-15 19:46:40.882586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.882920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.882926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.924 qpair failed and we were unable to recover it. 00:31:14.924 [2024-05-15 19:46:40.883320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.883675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.883681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.924 qpair failed and we were unable to recover it. 00:31:14.924 [2024-05-15 19:46:40.884034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.884436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.884443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.924 qpair failed and we were unable to recover it. 00:31:14.924 [2024-05-15 19:46:40.884798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.885195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.885202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.924 qpair failed and we were unable to recover it. 00:31:14.924 [2024-05-15 19:46:40.885596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.885958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.885965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.924 qpair failed and we were unable to recover it. 00:31:14.924 [2024-05-15 19:46:40.886333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.886667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.886676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.924 qpair failed and we were unable to recover it. 
00:31:14.924 [2024-05-15 19:46:40.887064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.887461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.887470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.924 qpair failed and we were unable to recover it. 00:31:14.924 [2024-05-15 19:46:40.887824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.888096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.888103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.924 qpair failed and we were unable to recover it. 00:31:14.924 [2024-05-15 19:46:40.888492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.888670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.888678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.924 qpair failed and we were unable to recover it. 00:31:14.924 [2024-05-15 19:46:40.889062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.889420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.889428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.924 qpair failed and we were unable to recover it. 00:31:14.924 [2024-05-15 19:46:40.889819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.890219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.890226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.924 qpair failed and we were unable to recover it. 00:31:14.924 [2024-05-15 19:46:40.890592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.890990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.890997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.924 qpair failed and we were unable to recover it. 00:31:14.924 [2024-05-15 19:46:40.891367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.891742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.891749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.924 qpair failed and we were unable to recover it. 
00:31:14.924 [2024-05-15 19:46:40.892125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.892502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.892509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.924 qpair failed and we were unable to recover it. 00:31:14.924 [2024-05-15 19:46:40.892864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.893259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.893266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.924 qpair failed and we were unable to recover it. 00:31:14.924 [2024-05-15 19:46:40.893624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.894025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.894032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.924 qpair failed and we were unable to recover it. 00:31:14.924 [2024-05-15 19:46:40.894282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.894619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.894627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.924 qpair failed and we were unable to recover it. 00:31:14.924 [2024-05-15 19:46:40.894998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.895400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.895407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.924 qpair failed and we were unable to recover it. 00:31:14.924 [2024-05-15 19:46:40.895604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.895991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.895998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.924 qpair failed and we were unable to recover it. 00:31:14.924 [2024-05-15 19:46:40.896366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.896775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.896783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.924 qpair failed and we were unable to recover it. 
00:31:14.924 [2024-05-15 19:46:40.897178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.897444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.897451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.924 qpair failed and we were unable to recover it. 00:31:14.924 [2024-05-15 19:46:40.897830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.898232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.898239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.924 qpair failed and we were unable to recover it. 00:31:14.924 [2024-05-15 19:46:40.898635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.899039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.899046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.924 qpair failed and we were unable to recover it. 00:31:14.924 [2024-05-15 19:46:40.899355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.899753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.899760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.924 qpair failed and we were unable to recover it. 00:31:14.924 [2024-05-15 19:46:40.900132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.900506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.900513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.924 qpair failed and we were unable to recover it. 00:31:14.924 [2024-05-15 19:46:40.900885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.901281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.924 [2024-05-15 19:46:40.901289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.925 qpair failed and we were unable to recover it. 00:31:14.925 [2024-05-15 19:46:40.901664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.902063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.902070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.925 qpair failed and we were unable to recover it. 
00:31:14.925 [2024-05-15 19:46:40.902437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.902831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.902838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.925 qpair failed and we were unable to recover it. 00:31:14.925 [2024-05-15 19:46:40.903213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.903553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.903560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.925 qpair failed and we were unable to recover it. 00:31:14.925 [2024-05-15 19:46:40.903902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.904271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.904277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.925 qpair failed and we were unable to recover it. 00:31:14.925 [2024-05-15 19:46:40.904680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.905084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.905091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.925 qpair failed and we were unable to recover it. 00:31:14.925 [2024-05-15 19:46:40.905476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.905570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.905576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.925 qpair failed and we were unable to recover it. 00:31:14.925 [2024-05-15 19:46:40.906021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.906387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.906394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.925 qpair failed and we were unable to recover it. 00:31:14.925 [2024-05-15 19:46:40.906743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.907137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.907143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.925 qpair failed and we were unable to recover it. 
00:31:14.925 [2024-05-15 19:46:40.907500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.907784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.907790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.925 qpair failed and we were unable to recover it. 00:31:14.925 [2024-05-15 19:46:40.908374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.908743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.908749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.925 qpair failed and we were unable to recover it. 00:31:14.925 [2024-05-15 19:46:40.908994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.909403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.909410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.925 qpair failed and we were unable to recover it. 00:31:14.925 [2024-05-15 19:46:40.909817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.910181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.910188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.925 qpair failed and we were unable to recover it. 00:31:14.925 [2024-05-15 19:46:40.910471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.910740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.910746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.925 qpair failed and we were unable to recover it. 00:31:14.925 [2024-05-15 19:46:40.911083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.911454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.911460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.925 qpair failed and we were unable to recover it. 00:31:14.925 [2024-05-15 19:46:40.911825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.912164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.912170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.925 qpair failed and we were unable to recover it. 
00:31:14.925 [2024-05-15 19:46:40.912537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.912813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.912819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.925 qpair failed and we were unable to recover it. 00:31:14.925 [2024-05-15 19:46:40.913248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.913537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.913544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.925 qpair failed and we were unable to recover it. 00:31:14.925 [2024-05-15 19:46:40.913923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.914260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.914267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.925 qpair failed and we were unable to recover it. 00:31:14.925 [2024-05-15 19:46:40.914542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.914821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.914827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.925 qpair failed and we were unable to recover it. 00:31:14.925 [2024-05-15 19:46:40.915190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.915560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.915566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.925 qpair failed and we were unable to recover it. 00:31:14.925 [2024-05-15 19:46:40.915946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.916335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.916342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.925 qpair failed and we were unable to recover it. 00:31:14.925 [2024-05-15 19:46:40.916770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.917136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.917143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.925 qpair failed and we were unable to recover it. 
00:31:14.925 [2024-05-15 19:46:40.917516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.917861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.917867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.925 qpair failed and we were unable to recover it. 00:31:14.925 [2024-05-15 19:46:40.918217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.918590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.918597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.925 qpair failed and we were unable to recover it. 00:31:14.925 [2024-05-15 19:46:40.919015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.919367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.919374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.925 qpair failed and we were unable to recover it. 00:31:14.925 [2024-05-15 19:46:40.919743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.920126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.920132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.925 qpair failed and we were unable to recover it. 00:31:14.925 [2024-05-15 19:46:40.920515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.920907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.920913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.925 qpair failed and we were unable to recover it. 00:31:14.925 [2024-05-15 19:46:40.921174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.921544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.921551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.925 qpair failed and we were unable to recover it. 00:31:14.925 [2024-05-15 19:46:40.921935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.922324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.925 [2024-05-15 19:46:40.922332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.925 qpair failed and we were unable to recover it. 
00:31:14.925 [2024-05-15 19:46:40.922697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.923089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.923095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.926 qpair failed and we were unable to recover it. 00:31:14.926 [2024-05-15 19:46:40.923550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.923851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.923859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.926 qpair failed and we were unable to recover it. 00:31:14.926 [2024-05-15 19:46:40.924229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.924590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.924597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.926 qpair failed and we were unable to recover it. 00:31:14.926 [2024-05-15 19:46:40.924982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.925335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.925341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.926 qpair failed and we were unable to recover it. 00:31:14.926 [2024-05-15 19:46:40.925565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.925935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.925941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.926 qpair failed and we were unable to recover it. 00:31:14.926 [2024-05-15 19:46:40.926327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.926694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.926700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.926 qpair failed and we were unable to recover it. 00:31:14.926 [2024-05-15 19:46:40.927064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.927425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.927432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.926 qpair failed and we were unable to recover it. 
00:31:14.926 [2024-05-15 19:46:40.927724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.928073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.928079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.926 qpair failed and we were unable to recover it. 00:31:14.926 [2024-05-15 19:46:40.928424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.928817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.928823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.926 qpair failed and we were unable to recover it. 00:31:14.926 [2024-05-15 19:46:40.929170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.929549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.929555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.926 qpair failed and we were unable to recover it. 00:31:14.926 [2024-05-15 19:46:40.929901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.930278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.930285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.926 qpair failed and we were unable to recover it. 00:31:14.926 [2024-05-15 19:46:40.930626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.930909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.930915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.926 qpair failed and we were unable to recover it. 00:31:14.926 [2024-05-15 19:46:40.931291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.931665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.931672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.926 qpair failed and we were unable to recover it. 00:31:14.926 [2024-05-15 19:46:40.931895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.932252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.932258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.926 qpair failed and we were unable to recover it. 
00:31:14.926 [2024-05-15 19:46:40.932654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.933011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.933017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.926 qpair failed and we were unable to recover it. 00:31:14.926 [2024-05-15 19:46:40.933390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.933634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.933640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.926 qpair failed and we were unable to recover it. 00:31:14.926 [2024-05-15 19:46:40.934070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.934345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.934352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.926 qpair failed and we were unable to recover it. 00:31:14.926 [2024-05-15 19:46:40.934620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.934971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.934977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.926 qpair failed and we were unable to recover it. 00:31:14.926 [2024-05-15 19:46:40.935365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.935784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.935792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.926 qpair failed and we were unable to recover it. 00:31:14.926 [2024-05-15 19:46:40.936178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.936527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.936533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.926 qpair failed and we were unable to recover it. 00:31:14.926 [2024-05-15 19:46:40.936784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.937130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.937137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.926 qpair failed and we were unable to recover it. 
00:31:14.926 [2024-05-15 19:46:40.937523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.937877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.937883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.926 qpair failed and we were unable to recover it. 00:31:14.926 [2024-05-15 19:46:40.938158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.938547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.938554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.926 qpair failed and we were unable to recover it. 00:31:14.926 [2024-05-15 19:46:40.938900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.939262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.939269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.926 qpair failed and we were unable to recover it. 00:31:14.926 [2024-05-15 19:46:40.939409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.939779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.939785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.926 qpair failed and we were unable to recover it. 00:31:14.926 [2024-05-15 19:46:40.940061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.940427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.940434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.926 qpair failed and we were unable to recover it. 00:31:14.926 [2024-05-15 19:46:40.940786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.941173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.941179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.926 qpair failed and we were unable to recover it. 00:31:14.926 [2024-05-15 19:46:40.941569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.941918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.941924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.926 qpair failed and we were unable to recover it. 
00:31:14.926 [2024-05-15 19:46:40.942201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.942541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.942551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.926 qpair failed and we were unable to recover it. 00:31:14.926 [2024-05-15 19:46:40.942843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.943112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.943119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.926 qpair failed and we were unable to recover it. 00:31:14.926 [2024-05-15 19:46:40.943483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.943843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.943849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.926 qpair failed and we were unable to recover it. 00:31:14.926 [2024-05-15 19:46:40.944216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.944586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.944592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.926 qpair failed and we were unable to recover it. 00:31:14.926 [2024-05-15 19:46:40.944861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.945137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.945143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.926 qpair failed and we were unable to recover it. 00:31:14.926 [2024-05-15 19:46:40.945492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.945857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.945864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.926 qpair failed and we were unable to recover it. 00:31:14.926 [2024-05-15 19:46:40.946211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.946505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.946512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.926 qpair failed and we were unable to recover it. 
00:31:14.926 [2024-05-15 19:46:40.946858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.926 [2024-05-15 19:46:40.947216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.947222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.927 qpair failed and we were unable to recover it. 00:31:14.927 [2024-05-15 19:46:40.947592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.947801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.947808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.927 qpair failed and we were unable to recover it. 00:31:14.927 [2024-05-15 19:46:40.948159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.948481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.948488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.927 qpair failed and we were unable to recover it. 00:31:14.927 [2024-05-15 19:46:40.948855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.949188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.949197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.927 qpair failed and we were unable to recover it. 00:31:14.927 [2024-05-15 19:46:40.949563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.949954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.949961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.927 qpair failed and we were unable to recover it. 00:31:14.927 [2024-05-15 19:46:40.950346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.950684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.950690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.927 qpair failed and we were unable to recover it. 00:31:14.927 [2024-05-15 19:46:40.950949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.951181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.951188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.927 qpair failed and we were unable to recover it. 
00:31:14.927 [2024-05-15 19:46:40.951547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.951919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.951925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.927 qpair failed and we were unable to recover it. 00:31:14.927 [2024-05-15 19:46:40.952289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.952656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.952663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.927 qpair failed and we were unable to recover it. 00:31:14.927 [2024-05-15 19:46:40.952986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.953381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.953388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.927 qpair failed and we were unable to recover it. 00:31:14.927 [2024-05-15 19:46:40.953775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.954150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.954156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.927 qpair failed and we were unable to recover it. 00:31:14.927 [2024-05-15 19:46:40.954501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.954899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.954905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.927 qpair failed and we were unable to recover it. 00:31:14.927 [2024-05-15 19:46:40.955254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.955644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.955650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.927 qpair failed and we were unable to recover it. 00:31:14.927 [2024-05-15 19:46:40.955931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.956363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.956371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.927 qpair failed and we were unable to recover it. 
00:31:14.927 [2024-05-15 19:46:40.956759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.957089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.957095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.927 qpair failed and we were unable to recover it. 00:31:14.927 [2024-05-15 19:46:40.957467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.957850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.957857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.927 qpair failed and we were unable to recover it. 00:31:14.927 [2024-05-15 19:46:40.958133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.958477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.958483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.927 qpair failed and we were unable to recover it. 00:31:14.927 [2024-05-15 19:46:40.958849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.959056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.959063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.927 qpair failed and we were unable to recover it. 00:31:14.927 [2024-05-15 19:46:40.959479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.959857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.959864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.927 qpair failed and we were unable to recover it. 00:31:14.927 [2024-05-15 19:46:40.960265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.960646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.960653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.927 qpair failed and we were unable to recover it. 00:31:14.927 [2024-05-15 19:46:40.961024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.961369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.961375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.927 qpair failed and we were unable to recover it. 
00:31:14.927 [2024-05-15 19:46:40.961660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.962019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.962026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.927 qpair failed and we were unable to recover it. 00:31:14.927 [2024-05-15 19:46:40.962428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.962686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.962692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.927 qpair failed and we were unable to recover it. 00:31:14.927 [2024-05-15 19:46:40.962925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.963301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.963307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.927 qpair failed and we were unable to recover it. 00:31:14.927 [2024-05-15 19:46:40.963705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.964059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.964066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.927 qpair failed and we were unable to recover it. 00:31:14.927 [2024-05-15 19:46:40.964364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.964595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.964601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.927 qpair failed and we were unable to recover it. 00:31:14.927 [2024-05-15 19:46:40.964974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.965337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.965344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.927 qpair failed and we were unable to recover it. 00:31:14.927 [2024-05-15 19:46:40.965685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.966082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.966088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.927 qpair failed and we were unable to recover it. 
00:31:14.927 [2024-05-15 19:46:40.966497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.966920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.966927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.927 qpair failed and we were unable to recover it. 00:31:14.927 [2024-05-15 19:46:40.967288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.967602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.967615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.927 qpair failed and we were unable to recover it. 00:31:14.927 [2024-05-15 19:46:40.967982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.968334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.968341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.927 qpair failed and we were unable to recover it. 00:31:14.927 [2024-05-15 19:46:40.968677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.969054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.969061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.927 qpair failed and we were unable to recover it. 00:31:14.927 [2024-05-15 19:46:40.969407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.969760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.969767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.927 qpair failed and we were unable to recover it. 00:31:14.927 [2024-05-15 19:46:40.970136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.970475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.970482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.927 qpair failed and we were unable to recover it. 00:31:14.927 [2024-05-15 19:46:40.970862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.971138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.971144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.927 qpair failed and we were unable to recover it. 
00:31:14.927 [2024-05-15 19:46:40.971515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.971743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.971749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.927 qpair failed and we were unable to recover it. 00:31:14.927 [2024-05-15 19:46:40.971915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.972314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.927 [2024-05-15 19:46:40.972321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.927 qpair failed and we were unable to recover it. 00:31:14.927 [2024-05-15 19:46:40.972623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.972974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.972980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.928 qpair failed and we were unable to recover it. 00:31:14.928 [2024-05-15 19:46:40.973204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.973575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.973582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.928 qpair failed and we were unable to recover it. 00:31:14.928 [2024-05-15 19:46:40.973974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.974349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.974356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.928 qpair failed and we were unable to recover it. 00:31:14.928 [2024-05-15 19:46:40.974736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.975128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.975134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.928 qpair failed and we were unable to recover it. 00:31:14.928 [2024-05-15 19:46:40.975477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.975688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.975695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.928 qpair failed and we were unable to recover it. 
00:31:14.928 [2024-05-15 19:46:40.975954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.976327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.976334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.928 qpair failed and we were unable to recover it. 00:31:14.928 [2024-05-15 19:46:40.976671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.977036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.977042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.928 qpair failed and we were unable to recover it. 00:31:14.928 [2024-05-15 19:46:40.977429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.977729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.977735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.928 qpair failed and we were unable to recover it. 00:31:14.928 [2024-05-15 19:46:40.978106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.978438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.978444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.928 qpair failed and we were unable to recover it. 00:31:14.928 [2024-05-15 19:46:40.978806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.979183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.979190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.928 qpair failed and we were unable to recover it. 00:31:14.928 [2024-05-15 19:46:40.979426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.979810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.979816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.928 qpair failed and we were unable to recover it. 00:31:14.928 [2024-05-15 19:46:40.980168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.980508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.980515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.928 qpair failed and we were unable to recover it. 
00:31:14.928 [2024-05-15 19:46:40.980873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.981252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.981259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.928 qpair failed and we were unable to recover it. 00:31:14.928 [2024-05-15 19:46:40.981532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.981921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.981927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.928 qpair failed and we were unable to recover it. 00:31:14.928 [2024-05-15 19:46:40.982284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.982657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.982664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.928 qpair failed and we were unable to recover it. 00:31:14.928 [2024-05-15 19:46:40.983022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.983386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.983393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.928 qpair failed and we were unable to recover it. 00:31:14.928 [2024-05-15 19:46:40.983577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.983906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.983912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.928 qpair failed and we were unable to recover it. 00:31:14.928 [2024-05-15 19:46:40.984134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.984383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.984390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.928 qpair failed and we were unable to recover it. 00:31:14.928 [2024-05-15 19:46:40.984802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.985163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.985170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.928 qpair failed and we were unable to recover it. 
00:31:14.928 [2024-05-15 19:46:40.985544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.985907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.985913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.928 qpair failed and we were unable to recover it. 00:31:14.928 [2024-05-15 19:46:40.986276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.986531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.986537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.928 qpair failed and we were unable to recover it. 00:31:14.928 [2024-05-15 19:46:40.986862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.987251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.987257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.928 qpair failed and we were unable to recover it. 00:31:14.928 [2024-05-15 19:46:40.987651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.988046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.988052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.928 qpair failed and we were unable to recover it. 00:31:14.928 [2024-05-15 19:46:40.988388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.988757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.988763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.928 qpair failed and we were unable to recover it. 00:31:14.928 [2024-05-15 19:46:40.988911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.989234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.989240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.928 qpair failed and we were unable to recover it. 00:31:14.928 [2024-05-15 19:46:40.989602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.989972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.989979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.928 qpair failed and we were unable to recover it. 
00:31:14.928 [2024-05-15 19:46:40.990329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.990672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.990679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.928 qpair failed and we were unable to recover it. 00:31:14.928 [2024-05-15 19:46:40.990876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.991149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.991155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.928 qpair failed and we were unable to recover it. 00:31:14.928 [2024-05-15 19:46:40.991529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.991882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.991888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.928 qpair failed and we were unable to recover it. 00:31:14.928 [2024-05-15 19:46:40.992116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.992481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.992488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.928 qpair failed and we were unable to recover it. 00:31:14.928 [2024-05-15 19:46:40.992834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.993190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.928 [2024-05-15 19:46:40.993196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.928 qpair failed and we were unable to recover it. 00:31:14.929 [2024-05-15 19:46:40.993549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:40.993890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:40.993896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.929 qpair failed and we were unable to recover it. 00:31:14.929 [2024-05-15 19:46:40.994159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:40.994538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:40.994545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.929 qpair failed and we were unable to recover it. 
00:31:14.929 [2024-05-15 19:46:40.994905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:40.995264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:40.995271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.929 qpair failed and we were unable to recover it. 00:31:14.929 [2024-05-15 19:46:40.995532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:40.995910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:40.995916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.929 qpair failed and we were unable to recover it. 00:31:14.929 [2024-05-15 19:46:40.996180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:40.996543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:40.996549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.929 qpair failed and we were unable to recover it. 00:31:14.929 [2024-05-15 19:46:40.996786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:40.997179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:40.997185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.929 qpair failed and we were unable to recover it. 00:31:14.929 [2024-05-15 19:46:40.997579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:40.997929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:40.997935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.929 qpair failed and we were unable to recover it. 00:31:14.929 [2024-05-15 19:46:40.998324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:40.998698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:40.998704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.929 qpair failed and we were unable to recover it. 00:31:14.929 [2024-05-15 19:46:40.999098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:40.999438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:40.999445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.929 qpair failed and we were unable to recover it. 
00:31:14.929 [2024-05-15 19:46:40.999801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.000162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.000168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.929 qpair failed and we were unable to recover it. 00:31:14.929 [2024-05-15 19:46:41.000559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.000963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.000970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.929 qpair failed and we were unable to recover it. 00:31:14.929 [2024-05-15 19:46:41.001335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.001678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.001684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.929 qpair failed and we were unable to recover it. 00:31:14.929 [2024-05-15 19:46:41.002028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.002383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.002390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.929 qpair failed and we were unable to recover it. 00:31:14.929 [2024-05-15 19:46:41.002773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.003002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.003009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.929 qpair failed and we were unable to recover it. 00:31:14.929 [2024-05-15 19:46:41.003375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.003789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.003795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.929 qpair failed and we were unable to recover it. 00:31:14.929 [2024-05-15 19:46:41.004123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.004385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.004392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.929 qpair failed and we were unable to recover it. 
00:31:14.929 [2024-05-15 19:46:41.004778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.005143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.005150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.929 qpair failed and we were unable to recover it. 00:31:14.929 [2024-05-15 19:46:41.005418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.005689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.005695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.929 qpair failed and we were unable to recover it. 00:31:14.929 [2024-05-15 19:46:41.006101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.006487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.006494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.929 qpair failed and we were unable to recover it. 00:31:14.929 [2024-05-15 19:46:41.006850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.007224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.007230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.929 qpair failed and we were unable to recover it. 00:31:14.929 [2024-05-15 19:46:41.007569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.007848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.007854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.929 qpair failed and we were unable to recover it. 00:31:14.929 [2024-05-15 19:46:41.008094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.008520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.008527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.929 qpair failed and we were unable to recover it. 00:31:14.929 [2024-05-15 19:46:41.008872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.009230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.009236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.929 qpair failed and we were unable to recover it. 
00:31:14.929 [2024-05-15 19:46:41.009493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.009869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.009876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.929 qpair failed and we were unable to recover it. 00:31:14.929 [2024-05-15 19:46:41.010260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.010606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.010613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.929 qpair failed and we were unable to recover it. 00:31:14.929 [2024-05-15 19:46:41.010960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.011319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.011326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.929 qpair failed and we were unable to recover it. 00:31:14.929 [2024-05-15 19:46:41.011698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.012063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.012070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.929 qpair failed and we were unable to recover it. 00:31:14.929 [2024-05-15 19:46:41.012415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.012773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.012780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.929 qpair failed and we were unable to recover it. 00:31:14.929 [2024-05-15 19:46:41.012927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.013296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.013302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.929 qpair failed and we were unable to recover it. 00:31:14.929 [2024-05-15 19:46:41.013652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.013854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.013861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.929 qpair failed and we were unable to recover it. 
00:31:14.929 [2024-05-15 19:46:41.014240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.014574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.014581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.929 qpair failed and we were unable to recover it. 00:31:14.929 [2024-05-15 19:46:41.014925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.015294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.015301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.929 qpair failed and we were unable to recover it. 00:31:14.929 [2024-05-15 19:46:41.015686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.016075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.929 [2024-05-15 19:46:41.016082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.929 qpair failed and we were unable to recover it. 00:31:14.929 [2024-05-15 19:46:41.016354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.016720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.016727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.930 qpair failed and we were unable to recover it. 00:31:14.930 [2024-05-15 19:46:41.016995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.017357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.017364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.930 qpair failed and we were unable to recover it. 00:31:14.930 [2024-05-15 19:46:41.017832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.018188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.018194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.930 qpair failed and we were unable to recover it. 00:31:14.930 [2024-05-15 19:46:41.018625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.018977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.018984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.930 qpair failed and we were unable to recover it. 
00:31:14.930 [2024-05-15 19:46:41.019373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.019849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.019855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.930 qpair failed and we were unable to recover it. 00:31:14.930 [2024-05-15 19:46:41.020244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.020569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.020576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.930 qpair failed and we were unable to recover it. 00:31:14.930 [2024-05-15 19:46:41.020826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.021187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.021193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.930 qpair failed and we were unable to recover it. 00:31:14.930 [2024-05-15 19:46:41.021560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.021903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.021910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.930 qpair failed and we were unable to recover it. 00:31:14.930 [2024-05-15 19:46:41.022279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.022530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.022537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.930 qpair failed and we were unable to recover it. 00:31:14.930 [2024-05-15 19:46:41.022888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.023167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.023174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.930 qpair failed and we were unable to recover it. 00:31:14.930 [2024-05-15 19:46:41.023553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.023904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.023910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.930 qpair failed and we were unable to recover it. 
00:31:14.930 [2024-05-15 19:46:41.024260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.024596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.024603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.930 qpair failed and we were unable to recover it. 00:31:14.930 [2024-05-15 19:46:41.024869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.025263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.025269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.930 qpair failed and we were unable to recover it. 00:31:14.930 [2024-05-15 19:46:41.025627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.025981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.025987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.930 qpair failed and we were unable to recover it. 00:31:14.930 [2024-05-15 19:46:41.026326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.026668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.026675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.930 qpair failed and we were unable to recover it. 00:31:14.930 [2024-05-15 19:46:41.027026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.027250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.027257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.930 qpair failed and we were unable to recover it. 00:31:14.930 [2024-05-15 19:46:41.027587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.027981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.027987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.930 qpair failed and we were unable to recover it. 00:31:14.930 [2024-05-15 19:46:41.028334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.028697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.028704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.930 qpair failed and we were unable to recover it. 
00:31:14.930 [2024-05-15 19:46:41.029056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.029440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.029447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.930 qpair failed and we were unable to recover it. 00:31:14.930 [2024-05-15 19:46:41.029807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.030174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.030180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.930 qpair failed and we were unable to recover it. 00:31:14.930 [2024-05-15 19:46:41.030534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.030772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.030779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.930 qpair failed and we were unable to recover it. 00:31:14.930 [2024-05-15 19:46:41.031174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.031566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.031573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.930 qpair failed and we were unable to recover it. 00:31:14.930 [2024-05-15 19:46:41.031937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.032191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.032198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.930 qpair failed and we were unable to recover it. 00:31:14.930 [2024-05-15 19:46:41.032557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.032940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.032947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.930 qpair failed and we were unable to recover it. 00:31:14.930 [2024-05-15 19:46:41.033389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.033745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.033751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.930 qpair failed and we were unable to recover it. 
00:31:14.930 [2024-05-15 19:46:41.034106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.930 [2024-05-15 19:46:41.034481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.034487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.931 qpair failed and we were unable to recover it. 00:31:14.931 [2024-05-15 19:46:41.034856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.035216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.035222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.931 qpair failed and we were unable to recover it. 00:31:14.931 [2024-05-15 19:46:41.035581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.035963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.035971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.931 qpair failed and we were unable to recover it. 00:31:14.931 [2024-05-15 19:46:41.036333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.036698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.036704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.931 qpair failed and we were unable to recover it. 00:31:14.931 [2024-05-15 19:46:41.037058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.037356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.037362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.931 qpair failed and we were unable to recover it. 00:31:14.931 [2024-05-15 19:46:41.037738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.038057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.038063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.931 qpair failed and we were unable to recover it. 00:31:14.931 [2024-05-15 19:46:41.038392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.038752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.038759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.931 qpair failed and we were unable to recover it. 
00:31:14.931 [2024-05-15 19:46:41.039026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.039405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.039411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.931 qpair failed and we were unable to recover it. 00:31:14.931 [2024-05-15 19:46:41.039744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.040132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.040138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.931 qpair failed and we were unable to recover it. 00:31:14.931 [2024-05-15 19:46:41.040483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.040736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.040742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.931 qpair failed and we were unable to recover it. 00:31:14.931 [2024-05-15 19:46:41.041105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.041407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.041415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.931 qpair failed and we were unable to recover it. 00:31:14.931 [2024-05-15 19:46:41.041784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.042133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.042139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.931 qpair failed and we were unable to recover it. 00:31:14.931 [2024-05-15 19:46:41.042485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.042861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.042867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.931 qpair failed and we were unable to recover it. 00:31:14.931 [2024-05-15 19:46:41.043240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.043612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.043618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.931 qpair failed and we were unable to recover it. 
00:31:14.931 [2024-05-15 19:46:41.044003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.044362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.044368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.931 qpair failed and we were unable to recover it. 00:31:14.931 [2024-05-15 19:46:41.044736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.045128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.045134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.931 qpair failed and we were unable to recover it. 00:31:14.931 [2024-05-15 19:46:41.045480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.045830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.045842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.931 qpair failed and we were unable to recover it. 00:31:14.931 [2024-05-15 19:46:41.046230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.046563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.046569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.931 qpair failed and we were unable to recover it. 00:31:14.931 [2024-05-15 19:46:41.046927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.047299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.047309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.931 qpair failed and we were unable to recover it. 00:31:14.931 [2024-05-15 19:46:41.047654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.047990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.048002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.931 qpair failed and we were unable to recover it. 00:31:14.931 [2024-05-15 19:46:41.048447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.048780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.048786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.931 qpair failed and we were unable to recover it. 
00:31:14.931 [2024-05-15 19:46:41.049149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.049549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.049555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.931 qpair failed and we were unable to recover it. 00:31:14.931 [2024-05-15 19:46:41.049914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.050272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.050279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.931 qpair failed and we were unable to recover it. 00:31:14.931 [2024-05-15 19:46:41.050653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.051009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.051016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.931 qpair failed and we were unable to recover it. 00:31:14.931 [2024-05-15 19:46:41.051390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.051721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.931 [2024-05-15 19:46:41.051734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.932 qpair failed and we were unable to recover it. 00:31:14.932 [2024-05-15 19:46:41.052104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.052468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.052474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.932 qpair failed and we were unable to recover it. 00:31:14.932 [2024-05-15 19:46:41.052840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.053230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.053237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.932 qpair failed and we were unable to recover it. 00:31:14.932 [2024-05-15 19:46:41.053694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.054039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.054045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.932 qpair failed and we were unable to recover it. 
00:31:14.932 [2024-05-15 19:46:41.054401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.054670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.054678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.932 qpair failed and we were unable to recover it. 00:31:14.932 [2024-05-15 19:46:41.055068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.055461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.055468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.932 qpair failed and we were unable to recover it. 00:31:14.932 [2024-05-15 19:46:41.055887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.056279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.056286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.932 qpair failed and we were unable to recover it. 00:31:14.932 [2024-05-15 19:46:41.056547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.056785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.056791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.932 qpair failed and we were unable to recover it. 00:31:14.932 [2024-05-15 19:46:41.057143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.057507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.057513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.932 qpair failed and we were unable to recover it. 00:31:14.932 [2024-05-15 19:46:41.057875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.058198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.058211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.932 qpair failed and we were unable to recover it. 00:31:14.932 [2024-05-15 19:46:41.058609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.058882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.058889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.932 qpair failed and we were unable to recover it. 
00:31:14.932 [2024-05-15 19:46:41.059243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.059592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.059599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.932 qpair failed and we were unable to recover it. 00:31:14.932 [2024-05-15 19:46:41.059964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.060322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.060329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.932 qpair failed and we were unable to recover it. 00:31:14.932 [2024-05-15 19:46:41.060698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.061028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.061034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.932 qpair failed and we were unable to recover it. 00:31:14.932 [2024-05-15 19:46:41.061302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.061733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.061741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.932 qpair failed and we were unable to recover it. 00:31:14.932 [2024-05-15 19:46:41.062099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.062294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.062301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.932 qpair failed and we were unable to recover it. 00:31:14.932 [2024-05-15 19:46:41.062674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.063029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.063036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.932 qpair failed and we were unable to recover it. 00:31:14.932 [2024-05-15 19:46:41.063383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.063773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.063779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.932 qpair failed and we were unable to recover it. 
00:31:14.932 [2024-05-15 19:46:41.064167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.064415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.064422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.932 qpair failed and we were unable to recover it. 00:31:14.932 [2024-05-15 19:46:41.064735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.065117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.065123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.932 qpair failed and we were unable to recover it. 00:31:14.932 [2024-05-15 19:46:41.065492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.065877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.065883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.932 qpair failed and we were unable to recover it. 00:31:14.932 [2024-05-15 19:46:41.066229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.066574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.066580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.932 qpair failed and we were unable to recover it. 00:31:14.932 [2024-05-15 19:46:41.066733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.067178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.067184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.932 qpair failed and we were unable to recover it. 00:31:14.932 [2024-05-15 19:46:41.067612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.067960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.067966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.932 qpair failed and we were unable to recover it. 00:31:14.932 [2024-05-15 19:46:41.068329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.068485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.068493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.932 qpair failed and we were unable to recover it. 
00:31:14.932 [2024-05-15 19:46:41.068837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.069161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.932 [2024-05-15 19:46:41.069168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.933 qpair failed and we were unable to recover it. 00:31:14.933 [2024-05-15 19:46:41.069557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.933 [2024-05-15 19:46:41.069885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.933 [2024-05-15 19:46:41.069891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.933 qpair failed and we were unable to recover it. 00:31:14.933 [2024-05-15 19:46:41.070153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.933 [2024-05-15 19:46:41.070575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.933 [2024-05-15 19:46:41.070581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.933 qpair failed and we were unable to recover it. 00:31:14.933 [2024-05-15 19:46:41.070943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.933 [2024-05-15 19:46:41.071336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.933 [2024-05-15 19:46:41.071343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.933 qpair failed and we were unable to recover it. 00:31:14.933 [2024-05-15 19:46:41.071723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.933 [2024-05-15 19:46:41.072153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.933 [2024-05-15 19:46:41.072159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.933 qpair failed and we were unable to recover it. 00:31:14.933 [2024-05-15 19:46:41.072528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.933 [2024-05-15 19:46:41.072911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.933 [2024-05-15 19:46:41.072918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.933 qpair failed and we were unable to recover it. 00:31:14.933 [2024-05-15 19:46:41.073260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.933 [2024-05-15 19:46:41.073508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.933 [2024-05-15 19:46:41.073515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.933 qpair failed and we were unable to recover it. 
00:31:14.933 [2024-05-15 19:46:41.073784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.933 [2024-05-15 19:46:41.074137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.933 [2024-05-15 19:46:41.074143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.933 qpair failed and we were unable to recover it. 00:31:14.933 [2024-05-15 19:46:41.074369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.933 [2024-05-15 19:46:41.074717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.933 [2024-05-15 19:46:41.074724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.933 qpair failed and we were unable to recover it. 00:31:14.933 [2024-05-15 19:46:41.075090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.933 [2024-05-15 19:46:41.075447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.933 [2024-05-15 19:46:41.075455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.933 qpair failed and we were unable to recover it. 00:31:14.933 [2024-05-15 19:46:41.075840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.933 [2024-05-15 19:46:41.076020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.933 [2024-05-15 19:46:41.076026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.933 qpair failed and we were unable to recover it. 00:31:14.933 [2024-05-15 19:46:41.076393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.933 [2024-05-15 19:46:41.076785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.933 [2024-05-15 19:46:41.076791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.933 qpair failed and we were unable to recover it. 00:31:14.933 [2024-05-15 19:46:41.077079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.933 [2024-05-15 19:46:41.077439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.933 [2024-05-15 19:46:41.077446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.933 qpair failed and we were unable to recover it. 00:31:14.933 [2024-05-15 19:46:41.077871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.933 [2024-05-15 19:46:41.078260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.933 [2024-05-15 19:46:41.078267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.933 qpair failed and we were unable to recover it. 
00:31:14.933 [2024-05-15 19:46:41.078579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.933 [2024-05-15 19:46:41.078933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.933 [2024-05-15 19:46:41.078940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.933 qpair failed and we were unable to recover it. 00:31:14.933 [2024-05-15 19:46:41.079380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.933 [2024-05-15 19:46:41.079701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.933 [2024-05-15 19:46:41.079708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.933 qpair failed and we were unable to recover it. 00:31:14.933 [2024-05-15 19:46:41.080074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.933 [2024-05-15 19:46:41.080435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.933 [2024-05-15 19:46:41.080442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.933 qpair failed and we were unable to recover it. 00:31:14.933 [2024-05-15 19:46:41.080787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.933 [2024-05-15 19:46:41.081144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.933 [2024-05-15 19:46:41.081150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.933 qpair failed and we were unable to recover it. 00:31:14.933 [2024-05-15 19:46:41.081492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.933 [2024-05-15 19:46:41.081833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.933 [2024-05-15 19:46:41.081839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.934 qpair failed and we were unable to recover it. 00:31:14.934 [2024-05-15 19:46:41.082188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.934 [2024-05-15 19:46:41.082541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.934 [2024-05-15 19:46:41.082549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.934 qpair failed and we were unable to recover it. 00:31:14.934 [2024-05-15 19:46:41.082865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.934 [2024-05-15 19:46:41.083039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.934 [2024-05-15 19:46:41.083045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.934 qpair failed and we were unable to recover it. 
00:31:14.934 [2024-05-15 19:46:41.083465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.934 [2024-05-15 19:46:41.083854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.934 [2024-05-15 19:46:41.083860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.934 qpair failed and we were unable to recover it. 00:31:14.934 [2024-05-15 19:46:41.084124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.934 [2024-05-15 19:46:41.084475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.934 [2024-05-15 19:46:41.084481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.934 qpair failed and we were unable to recover it. 00:31:14.934 [2024-05-15 19:46:41.084930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.934 [2024-05-15 19:46:41.085273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.934 [2024-05-15 19:46:41.085279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.934 qpair failed and we were unable to recover it. 00:31:14.934 [2024-05-15 19:46:41.085474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.934 [2024-05-15 19:46:41.085799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.934 [2024-05-15 19:46:41.085805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.934 qpair failed and we were unable to recover it. 00:31:14.934 [2024-05-15 19:46:41.086238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.934 [2024-05-15 19:46:41.086396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.934 [2024-05-15 19:46:41.086404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.934 qpair failed and we were unable to recover it. 00:31:14.934 [2024-05-15 19:46:41.086761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.934 [2024-05-15 19:46:41.087112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.934 [2024-05-15 19:46:41.087118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.934 qpair failed and we were unable to recover it. 00:31:14.934 [2024-05-15 19:46:41.087516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.934 [2024-05-15 19:46:41.087874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.934 [2024-05-15 19:46:41.087880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.934 qpair failed and we were unable to recover it. 
00:31:14.934 [2024-05-15 19:46:41.088217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.934 [2024-05-15 19:46:41.088574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.934 [2024-05-15 19:46:41.088580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.934 qpair failed and we were unable to recover it. 00:31:14.934 [2024-05-15 19:46:41.088953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.934 [2024-05-15 19:46:41.089197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.934 [2024-05-15 19:46:41.089203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.934 qpair failed and we were unable to recover it. 00:31:14.934 [2024-05-15 19:46:41.089593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.934 [2024-05-15 19:46:41.090000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.934 [2024-05-15 19:46:41.090007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.934 qpair failed and we were unable to recover it. 00:31:14.934 [2024-05-15 19:46:41.090356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.934 [2024-05-15 19:46:41.090704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.934 [2024-05-15 19:46:41.090711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.934 qpair failed and we were unable to recover it. 00:31:14.934 [2024-05-15 19:46:41.090960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.934 [2024-05-15 19:46:41.091316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.934 [2024-05-15 19:46:41.091323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.934 qpair failed and we were unable to recover it. 00:31:14.934 [2024-05-15 19:46:41.091579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.934 [2024-05-15 19:46:41.091947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.934 [2024-05-15 19:46:41.091953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.934 qpair failed and we were unable to recover it. 00:31:14.934 [2024-05-15 19:46:41.092292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.934 [2024-05-15 19:46:41.092681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.934 [2024-05-15 19:46:41.092688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.934 qpair failed and we were unable to recover it. 
00:31:14.934 [2024-05-15 19:46:41.093043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.934 [2024-05-15 19:46:41.093283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.934 [2024-05-15 19:46:41.093290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.934 qpair failed and we were unable to recover it. 00:31:14.934 [2024-05-15 19:46:41.093654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.934 [2024-05-15 19:46:41.094014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.934 [2024-05-15 19:46:41.094020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.934 qpair failed and we were unable to recover it. 00:31:14.934 [2024-05-15 19:46:41.094393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.934 [2024-05-15 19:46:41.094785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.934 [2024-05-15 19:46:41.094792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.934 qpair failed and we were unable to recover it. 00:31:14.934 [2024-05-15 19:46:41.095144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.934 [2024-05-15 19:46:41.095531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.934 [2024-05-15 19:46:41.095537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.934 qpair failed and we were unable to recover it. 00:31:14.934 [2024-05-15 19:46:41.095798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.934 [2024-05-15 19:46:41.096143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.934 [2024-05-15 19:46:41.096150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.934 qpair failed and we were unable to recover it. 00:31:14.934 [2024-05-15 19:46:41.096508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.934 [2024-05-15 19:46:41.096866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.934 [2024-05-15 19:46:41.096872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:14.934 qpair failed and we were unable to recover it. 00:31:15.204 [2024-05-15 19:46:41.097220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.204 [2024-05-15 19:46:41.097463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.097470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.205 qpair failed and we were unable to recover it. 
00:31:15.205 [2024-05-15 19:46:41.097860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.098209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.098216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.205 qpair failed and we were unable to recover it. 00:31:15.205 [2024-05-15 19:46:41.098590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.098969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.098975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.205 qpair failed and we were unable to recover it. 00:31:15.205 [2024-05-15 19:46:41.099347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.099719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.099726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.205 qpair failed and we were unable to recover it. 00:31:15.205 [2024-05-15 19:46:41.100090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.100490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.100497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.205 qpair failed and we were unable to recover it. 00:31:15.205 [2024-05-15 19:46:41.100891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.101246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.101252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.205 qpair failed and we were unable to recover it. 00:31:15.205 [2024-05-15 19:46:41.101624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.101980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.101986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.205 qpair failed and we were unable to recover it. 00:31:15.205 [2024-05-15 19:46:41.102368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.102714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.102720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.205 qpair failed and we were unable to recover it. 
00:31:15.205 [2024-05-15 19:46:41.102948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.103328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.103334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.205 qpair failed and we were unable to recover it. 00:31:15.205 [2024-05-15 19:46:41.103606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.103984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.103990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.205 qpair failed and we were unable to recover it. 00:31:15.205 [2024-05-15 19:46:41.104339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.104681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.104687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.205 qpair failed and we were unable to recover it. 00:31:15.205 [2024-05-15 19:46:41.105038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.105395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.105402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.205 qpair failed and we were unable to recover it. 00:31:15.205 [2024-05-15 19:46:41.105776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.106146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.106152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.205 qpair failed and we were unable to recover it. 00:31:15.205 [2024-05-15 19:46:41.106589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.106939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.106945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.205 qpair failed and we were unable to recover it. 00:31:15.205 [2024-05-15 19:46:41.107337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.107675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.107681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.205 qpair failed and we were unable to recover it. 
00:31:15.205 [2024-05-15 19:46:41.108060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.108347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.108353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.205 qpair failed and we were unable to recover it. 00:31:15.205 [2024-05-15 19:46:41.108727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.109105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.109111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.205 qpair failed and we were unable to recover it. 00:31:15.205 [2024-05-15 19:46:41.109442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.109731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.109737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.205 qpair failed and we were unable to recover it. 00:31:15.205 [2024-05-15 19:46:41.110087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.110437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.110444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.205 qpair failed and we were unable to recover it. 00:31:15.205 [2024-05-15 19:46:41.110828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.111192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.111199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.205 qpair failed and we were unable to recover it. 00:31:15.205 [2024-05-15 19:46:41.111566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.111957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.111964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.205 qpair failed and we were unable to recover it. 00:31:15.205 [2024-05-15 19:46:41.112332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.112670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.112676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.205 qpair failed and we were unable to recover it. 
00:31:15.205 [2024-05-15 19:46:41.113027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.113273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.113279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.205 qpair failed and we were unable to recover it. 00:31:15.205 [2024-05-15 19:46:41.113655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.114005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.114012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.205 qpair failed and we were unable to recover it. 00:31:15.205 [2024-05-15 19:46:41.114380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.114657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.114664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.205 qpair failed and we were unable to recover it. 00:31:15.205 [2024-05-15 19:46:41.115012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.115361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.115368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.205 qpair failed and we were unable to recover it. 00:31:15.205 [2024-05-15 19:46:41.115645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.116025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.116031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.205 qpair failed and we were unable to recover it. 00:31:15.205 [2024-05-15 19:46:41.116295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.116702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.116708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.205 qpair failed and we were unable to recover it. 00:31:15.205 [2024-05-15 19:46:41.117061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.117433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.117439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.205 qpair failed and we were unable to recover it. 
00:31:15.205 [2024-05-15 19:46:41.117890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.205 [2024-05-15 19:46:41.118275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.118282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.206 qpair failed and we were unable to recover it. 00:31:15.206 [2024-05-15 19:46:41.118655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.119049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.119056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.206 qpair failed and we were unable to recover it. 00:31:15.206 [2024-05-15 19:46:41.119422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.119729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.119735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.206 qpair failed and we were unable to recover it. 00:31:15.206 [2024-05-15 19:46:41.120098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.120521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.120527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.206 qpair failed and we were unable to recover it. 00:31:15.206 [2024-05-15 19:46:41.120882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.121223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.121230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.206 qpair failed and we were unable to recover it. 00:31:15.206 [2024-05-15 19:46:41.121558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.121988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.121994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.206 qpair failed and we were unable to recover it. 00:31:15.206 [2024-05-15 19:46:41.122385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.122762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.122769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.206 qpair failed and we were unable to recover it. 
00:31:15.206 [2024-05-15 19:46:41.123117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.123479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.123486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.206 qpair failed and we were unable to recover it. 00:31:15.206 [2024-05-15 19:46:41.123860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.124232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.124238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.206 qpair failed and we were unable to recover it. 00:31:15.206 [2024-05-15 19:46:41.124574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.124926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.124932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.206 qpair failed and we were unable to recover it. 00:31:15.206 [2024-05-15 19:46:41.125299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.125703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.125710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.206 qpair failed and we were unable to recover it. 00:31:15.206 [2024-05-15 19:46:41.125974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.126328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.126334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.206 qpair failed and we were unable to recover it. 00:31:15.206 [2024-05-15 19:46:41.126529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.126838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.126845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.206 qpair failed and we were unable to recover it. 00:31:15.206 [2024-05-15 19:46:41.127192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.127558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.127565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.206 qpair failed and we were unable to recover it. 
00:31:15.206 [2024-05-15 19:46:41.127898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.128287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.128294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.206 qpair failed and we were unable to recover it. 00:31:15.206 [2024-05-15 19:46:41.128562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.128908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.128915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.206 qpair failed and we were unable to recover it. 00:31:15.206 [2024-05-15 19:46:41.129280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.129620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.129626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.206 qpair failed and we were unable to recover it. 00:31:15.206 [2024-05-15 19:46:41.130013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.130362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.130368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.206 qpair failed and we were unable to recover it. 00:31:15.206 [2024-05-15 19:46:41.130731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.131080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.131087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.206 qpair failed and we were unable to recover it. 00:31:15.206 [2024-05-15 19:46:41.131370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.131734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.131741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.206 qpair failed and we were unable to recover it. 00:31:15.206 [2024-05-15 19:46:41.132160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.132557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.132564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.206 qpair failed and we were unable to recover it. 
00:31:15.206 [2024-05-15 19:46:41.132809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.133183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.133189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.206 qpair failed and we were unable to recover it. 00:31:15.206 [2024-05-15 19:46:41.133587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.133925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.133932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.206 qpair failed and we were unable to recover it. 00:31:15.206 [2024-05-15 19:46:41.134321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.134705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.134712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.206 qpair failed and we were unable to recover it. 00:31:15.206 [2024-05-15 19:46:41.135081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.135458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.135465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.206 qpair failed and we were unable to recover it. 00:31:15.206 [2024-05-15 19:46:41.135851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.136200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.136206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.206 qpair failed and we were unable to recover it. 00:31:15.206 [2024-05-15 19:46:41.136562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.136857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.136863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.206 qpair failed and we were unable to recover it. 00:31:15.206 [2024-05-15 19:46:41.137010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.137354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.137361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.206 qpair failed and we were unable to recover it. 
00:31:15.206 [2024-05-15 19:46:41.137745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.138145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.138151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.206 qpair failed and we were unable to recover it. 00:31:15.206 [2024-05-15 19:46:41.138393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.138765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.206 [2024-05-15 19:46:41.138772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.206 qpair failed and we were unable to recover it. 00:31:15.206 [2024-05-15 19:46:41.139116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.139467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.139474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.207 qpair failed and we were unable to recover it. 00:31:15.207 [2024-05-15 19:46:41.139826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.140205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.140211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.207 qpair failed and we were unable to recover it. 00:31:15.207 [2024-05-15 19:46:41.140547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.140918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.140924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.207 qpair failed and we were unable to recover it. 00:31:15.207 [2024-05-15 19:46:41.141324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.141690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.141697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.207 qpair failed and we were unable to recover it. 00:31:15.207 [2024-05-15 19:46:41.142086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.142492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.142498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.207 qpair failed and we were unable to recover it. 
00:31:15.207 [2024-05-15 19:46:41.142883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.143275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.143281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.207 qpair failed and we were unable to recover it. 00:31:15.207 [2024-05-15 19:46:41.143611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.144004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.144011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.207 qpair failed and we were unable to recover it. 00:31:15.207 [2024-05-15 19:46:41.144356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.144730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.144737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.207 qpair failed and we were unable to recover it. 00:31:15.207 [2024-05-15 19:46:41.145102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.145452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.145459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.207 qpair failed and we were unable to recover it. 00:31:15.207 [2024-05-15 19:46:41.145810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.146180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.146186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.207 qpair failed and we were unable to recover it. 00:31:15.207 [2024-05-15 19:46:41.146580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.146971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.146977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.207 qpair failed and we were unable to recover it. 00:31:15.207 [2024-05-15 19:46:41.147305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.147680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.147686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.207 qpair failed and we were unable to recover it. 
00:31:15.207 [2024-05-15 19:46:41.148031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.148538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.148565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.207 qpair failed and we were unable to recover it. 00:31:15.207 [2024-05-15 19:46:41.148765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.149021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.149028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.207 qpair failed and we were unable to recover it. 00:31:15.207 [2024-05-15 19:46:41.149356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.149739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.149746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.207 qpair failed and we were unable to recover it. 00:31:15.207 [2024-05-15 19:46:41.150152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.150551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.150557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.207 qpair failed and we were unable to recover it. 00:31:15.207 [2024-05-15 19:46:41.150935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.151294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.151300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.207 qpair failed and we were unable to recover it. 00:31:15.207 [2024-05-15 19:46:41.151565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.151948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.151954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.207 qpair failed and we were unable to recover it. 00:31:15.207 [2024-05-15 19:46:41.152309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.152658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.152665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.207 qpair failed and we were unable to recover it. 
00:31:15.207 [2024-05-15 19:46:41.152895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.153154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.153161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.207 qpair failed and we were unable to recover it. 00:31:15.207 [2024-05-15 19:46:41.153518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.153875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.153885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.207 qpair failed and we were unable to recover it. 00:31:15.207 [2024-05-15 19:46:41.154275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.154628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.154635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.207 qpair failed and we were unable to recover it. 00:31:15.207 [2024-05-15 19:46:41.154977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.155394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.155400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.207 qpair failed and we were unable to recover it. 00:31:15.207 [2024-05-15 19:46:41.155732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.156111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.156117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.207 qpair failed and we were unable to recover it. 00:31:15.207 [2024-05-15 19:46:41.156450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.156848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.156854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.207 qpair failed and we were unable to recover it. 00:31:15.207 [2024-05-15 19:46:41.157201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.157547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.157554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.207 qpair failed and we were unable to recover it. 
00:31:15.207 [2024-05-15 19:46:41.157921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.158276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.158283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.207 qpair failed and we were unable to recover it. 00:31:15.207 [2024-05-15 19:46:41.158643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.159032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.159040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.207 qpair failed and we were unable to recover it. 00:31:15.207 [2024-05-15 19:46:41.159411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.159801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.159807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.207 qpair failed and we were unable to recover it. 00:31:15.207 [2024-05-15 19:46:41.160155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.160542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.207 [2024-05-15 19:46:41.160549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.208 qpair failed and we were unable to recover it. 00:31:15.208 [2024-05-15 19:46:41.160897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.161272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.161279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.208 qpair failed and we were unable to recover it. 00:31:15.208 [2024-05-15 19:46:41.161519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.161874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.161881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.208 qpair failed and we were unable to recover it. 00:31:15.208 [2024-05-15 19:46:41.162233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.162582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.162589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.208 qpair failed and we were unable to recover it. 
00:31:15.208 [2024-05-15 19:46:41.162985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.163334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.163341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.208 qpair failed and we were unable to recover it. 00:31:15.208 [2024-05-15 19:46:41.163724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.164094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.164100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.208 qpair failed and we were unable to recover it. 00:31:15.208 [2024-05-15 19:46:41.164487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.164840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.164847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.208 qpair failed and we were unable to recover it. 00:31:15.208 [2024-05-15 19:46:41.165211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.165582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.165588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.208 qpair failed and we were unable to recover it. 00:31:15.208 [2024-05-15 19:46:41.165975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.166378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.166385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.208 qpair failed and we were unable to recover it. 00:31:15.208 [2024-05-15 19:46:41.166735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.167134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.167140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.208 qpair failed and we were unable to recover it. 00:31:15.208 [2024-05-15 19:46:41.167494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.167871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.167877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.208 qpair failed and we were unable to recover it. 
00:31:15.208 [2024-05-15 19:46:41.168229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.168582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.168591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.208 qpair failed and we were unable to recover it. 00:31:15.208 [2024-05-15 19:46:41.168939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.169309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.169323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.208 qpair failed and we were unable to recover it. 00:31:15.208 [2024-05-15 19:46:41.169713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.170107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.170113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.208 qpair failed and we were unable to recover it. 00:31:15.208 [2024-05-15 19:46:41.170551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.170995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.171005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.208 qpair failed and we were unable to recover it. 00:31:15.208 [2024-05-15 19:46:41.171353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.171686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.171692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.208 qpair failed and we were unable to recover it. 00:31:15.208 [2024-05-15 19:46:41.171952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.172220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.172227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.208 qpair failed and we were unable to recover it. 00:31:15.208 [2024-05-15 19:46:41.172492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.172858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.172865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.208 qpair failed and we were unable to recover it. 
00:31:15.208 [2024-05-15 19:46:41.173130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.173536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.173542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.208 qpair failed and we were unable to recover it. 00:31:15.208 [2024-05-15 19:46:41.173935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.174287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.174293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.208 qpair failed and we were unable to recover it. 00:31:15.208 [2024-05-15 19:46:41.174601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.174973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.174979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.208 qpair failed and we were unable to recover it. 00:31:15.208 [2024-05-15 19:46:41.175334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.175707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.175717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.208 qpair failed and we were unable to recover it. 00:31:15.208 [2024-05-15 19:46:41.175980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.176328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.176335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.208 qpair failed and we were unable to recover it. 00:31:15.208 [2024-05-15 19:46:41.176537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.176868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.176874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.208 qpair failed and we were unable to recover it. 00:31:15.208 [2024-05-15 19:46:41.177203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.177585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.208 [2024-05-15 19:46:41.177592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.208 qpair failed and we were unable to recover it. 
00:31:15.208 [2024-05-15 19:46:41.177937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:15.208 [2024-05-15 19:46:41.178301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:15.208 [2024-05-15 19:46:41.178307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420
00:31:15.208 qpair failed and we were unable to recover it.
00:31:15.208 [... the same three-message sequence (two posix_sock_create "connect() failed, errno = 111" errors, one nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it.") repeats for every reconnect attempt timestamped 19:46:41.178 through 19:46:41.284 ...]
00:31:15.214 [2024-05-15 19:46:41.284784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:15.214 [2024-05-15 19:46:41.285146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:15.214 [2024-05-15 19:46:41.285153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420
00:31:15.214 qpair failed and we were unable to recover it.
00:31:15.214 [2024-05-15 19:46:41.285517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.214 [2024-05-15 19:46:41.285899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.214 [2024-05-15 19:46:41.285906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.214 qpair failed and we were unable to recover it. 00:31:15.214 [2024-05-15 19:46:41.286307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.214 [2024-05-15 19:46:41.286688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.214 [2024-05-15 19:46:41.286695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.214 qpair failed and we were unable to recover it. 00:31:15.214 [2024-05-15 19:46:41.287067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.214 [2024-05-15 19:46:41.287528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.214 [2024-05-15 19:46:41.287555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.214 qpair failed and we were unable to recover it. 00:31:15.214 [2024-05-15 19:46:41.287932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.214 [2024-05-15 19:46:41.288301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.214 [2024-05-15 19:46:41.288317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.214 qpair failed and we were unable to recover it. 00:31:15.214 [2024-05-15 19:46:41.288685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.214 [2024-05-15 19:46:41.289094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.214 [2024-05-15 19:46:41.289101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.214 qpair failed and we were unable to recover it. 00:31:15.214 [2024-05-15 19:46:41.289601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.214 [2024-05-15 19:46:41.289898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.214 [2024-05-15 19:46:41.289907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.214 qpair failed and we were unable to recover it. 00:31:15.214 [2024-05-15 19:46:41.290264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.214 [2024-05-15 19:46:41.290466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.214 [2024-05-15 19:46:41.290474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.214 qpair failed and we were unable to recover it. 
00:31:15.214 [2024-05-15 19:46:41.290849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.214 [2024-05-15 19:46:41.291128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.214 [2024-05-15 19:46:41.291135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.214 qpair failed and we were unable to recover it. 00:31:15.214 [2024-05-15 19:46:41.291449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.214 [2024-05-15 19:46:41.291804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.214 [2024-05-15 19:46:41.291811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.214 qpair failed and we were unable to recover it. 00:31:15.214 [2024-05-15 19:46:41.292033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.214 [2024-05-15 19:46:41.292440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.214 [2024-05-15 19:46:41.292448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.214 qpair failed and we were unable to recover it. 00:31:15.214 [2024-05-15 19:46:41.292709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.214 [2024-05-15 19:46:41.293061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.214 [2024-05-15 19:46:41.293068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.214 qpair failed and we were unable to recover it. 00:31:15.214 [2024-05-15 19:46:41.293426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.214 [2024-05-15 19:46:41.293872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.214 [2024-05-15 19:46:41.293879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.214 qpair failed and we were unable to recover it. 00:31:15.214 [2024-05-15 19:46:41.294240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.214 [2024-05-15 19:46:41.294503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.214 [2024-05-15 19:46:41.294511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.214 qpair failed and we were unable to recover it. 00:31:15.214 [2024-05-15 19:46:41.294912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.214 [2024-05-15 19:46:41.295311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.214 [2024-05-15 19:46:41.295324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.215 qpair failed and we were unable to recover it. 
00:31:15.215 [2024-05-15 19:46:41.295575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.295928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.295935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.215 qpair failed and we were unable to recover it. 00:31:15.215 [2024-05-15 19:46:41.296294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.296618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.296625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.215 qpair failed and we were unable to recover it. 00:31:15.215 [2024-05-15 19:46:41.297016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.297375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.297383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.215 qpair failed and we were unable to recover it. 00:31:15.215 [2024-05-15 19:46:41.297789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.298049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.298056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.215 qpair failed and we were unable to recover it. 00:31:15.215 [2024-05-15 19:46:41.298427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.298800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.298806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.215 qpair failed and we were unable to recover it. 00:31:15.215 [2024-05-15 19:46:41.299174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.299533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.299539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.215 qpair failed and we were unable to recover it. 00:31:15.215 [2024-05-15 19:46:41.299902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.300230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.300237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.215 qpair failed and we were unable to recover it. 
00:31:15.215 [2024-05-15 19:46:41.300594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.300953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.300959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.215 qpair failed and we were unable to recover it. 00:31:15.215 [2024-05-15 19:46:41.301286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.301667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.301674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.215 qpair failed and we were unable to recover it. 00:31:15.215 [2024-05-15 19:46:41.302112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.302561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.302588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.215 qpair failed and we were unable to recover it. 00:31:15.215 [2024-05-15 19:46:41.302990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.303195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.303204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.215 qpair failed and we were unable to recover it. 00:31:15.215 [2024-05-15 19:46:41.303488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.303895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.303901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.215 qpair failed and we were unable to recover it. 00:31:15.215 [2024-05-15 19:46:41.304180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.304555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.304561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.215 qpair failed and we were unable to recover it. 00:31:15.215 [2024-05-15 19:46:41.304911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.305273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.305280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.215 qpair failed and we were unable to recover it. 
00:31:15.215 [2024-05-15 19:46:41.305666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.306060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.306067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.215 qpair failed and we were unable to recover it. 00:31:15.215 [2024-05-15 19:46:41.306408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.306770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.306776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.215 qpair failed and we were unable to recover it. 00:31:15.215 [2024-05-15 19:46:41.307149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.307521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.307527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.215 qpair failed and we were unable to recover it. 00:31:15.215 [2024-05-15 19:46:41.307873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.308223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.308229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.215 qpair failed and we were unable to recover it. 00:31:15.215 [2024-05-15 19:46:41.308406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.308667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.308674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.215 qpair failed and we were unable to recover it. 00:31:15.215 [2024-05-15 19:46:41.308917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.309160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.309167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.215 qpair failed and we were unable to recover it. 00:31:15.215 [2024-05-15 19:46:41.309529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.309797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.309803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.215 qpair failed and we were unable to recover it. 
00:31:15.215 [2024-05-15 19:46:41.310175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.310627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.310633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.215 qpair failed and we were unable to recover it. 00:31:15.215 [2024-05-15 19:46:41.310980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.311333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.311340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.215 qpair failed and we were unable to recover it. 00:31:15.215 [2024-05-15 19:46:41.311646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.312050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.312056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.215 qpair failed and we were unable to recover it. 00:31:15.215 [2024-05-15 19:46:41.312402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.312791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.312798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.215 qpair failed and we were unable to recover it. 00:31:15.215 [2024-05-15 19:46:41.313147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.313491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.313498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.215 qpair failed and we were unable to recover it. 00:31:15.215 [2024-05-15 19:46:41.313823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.314212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.314218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.215 qpair failed and we were unable to recover it. 00:31:15.215 [2024-05-15 19:46:41.314578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.314942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.215 [2024-05-15 19:46:41.314948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.215 qpair failed and we were unable to recover it. 
00:31:15.216 [2024-05-15 19:46:41.315340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.315710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.315716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.216 qpair failed and we were unable to recover it. 00:31:15.216 [2024-05-15 19:46:41.316057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.316440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.316447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.216 qpair failed and we were unable to recover it. 00:31:15.216 [2024-05-15 19:46:41.316795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.316920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.316927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.216 qpair failed and we were unable to recover it. 00:31:15.216 [2024-05-15 19:46:41.317268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.317635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.317642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.216 qpair failed and we were unable to recover it. 00:31:15.216 [2024-05-15 19:46:41.317897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.318258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.318265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.216 qpair failed and we were unable to recover it. 00:31:15.216 [2024-05-15 19:46:41.318546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.318747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.318754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.216 qpair failed and we were unable to recover it. 00:31:15.216 [2024-05-15 19:46:41.319137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.319557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.319564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.216 qpair failed and we were unable to recover it. 
00:31:15.216 [2024-05-15 19:46:41.319837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.320074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.320080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.216 qpair failed and we were unable to recover it. 00:31:15.216 [2024-05-15 19:46:41.320347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.320756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.320762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.216 qpair failed and we were unable to recover it. 00:31:15.216 [2024-05-15 19:46:41.321116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.321374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.321381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.216 qpair failed and we were unable to recover it. 00:31:15.216 [2024-05-15 19:46:41.321874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.322247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.322253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.216 qpair failed and we were unable to recover it. 00:31:15.216 [2024-05-15 19:46:41.322503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.322883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.322890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.216 qpair failed and we were unable to recover it. 00:31:15.216 [2024-05-15 19:46:41.323242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.323585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.323592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.216 qpair failed and we were unable to recover it. 00:31:15.216 [2024-05-15 19:46:41.323961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.324319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.324325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.216 qpair failed and we were unable to recover it. 
00:31:15.216 [2024-05-15 19:46:41.324682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.325064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.325070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.216 qpair failed and we were unable to recover it. 00:31:15.216 [2024-05-15 19:46:41.325430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.325676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.325682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.216 qpair failed and we were unable to recover it. 00:31:15.216 [2024-05-15 19:46:41.326016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.326378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.326385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.216 qpair failed and we were unable to recover it. 00:31:15.216 [2024-05-15 19:46:41.326570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.326829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.326836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.216 qpair failed and we were unable to recover it. 00:31:15.216 [2024-05-15 19:46:41.327129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.327526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.327532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.216 qpair failed and we were unable to recover it. 00:31:15.216 [2024-05-15 19:46:41.327925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.328276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.328282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.216 qpair failed and we were unable to recover it. 00:31:15.216 [2024-05-15 19:46:41.328707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.329067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.329074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.216 qpair failed and we were unable to recover it. 
00:31:15.216 [2024-05-15 19:46:41.329523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.329872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.329879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.216 qpair failed and we were unable to recover it. 00:31:15.216 [2024-05-15 19:46:41.330303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.330583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.330590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.216 qpair failed and we were unable to recover it. 00:31:15.216 [2024-05-15 19:46:41.330939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.331298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.331304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.216 qpair failed and we were unable to recover it. 00:31:15.216 [2024-05-15 19:46:41.331684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.332063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.332069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.216 qpair failed and we were unable to recover it. 00:31:15.216 [2024-05-15 19:46:41.332539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.333010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.333020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.216 qpair failed and we were unable to recover it. 00:31:15.216 [2024-05-15 19:46:41.333413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.216 [2024-05-15 19:46:41.333672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.333679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.217 qpair failed and we were unable to recover it. 00:31:15.217 [2024-05-15 19:46:41.333930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.334175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.334182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.217 qpair failed and we were unable to recover it. 
00:31:15.217 [2024-05-15 19:46:41.334388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.334751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.334757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.217 qpair failed and we were unable to recover it. 00:31:15.217 [2024-05-15 19:46:41.335106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.335449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.335456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.217 qpair failed and we were unable to recover it. 00:31:15.217 [2024-05-15 19:46:41.335828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.336086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.336092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.217 qpair failed and we were unable to recover it. 00:31:15.217 [2024-05-15 19:46:41.336447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.336747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.336753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.217 qpair failed and we were unable to recover it. 00:31:15.217 [2024-05-15 19:46:41.337008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.337282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.337288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.217 qpair failed and we were unable to recover it. 00:31:15.217 [2024-05-15 19:46:41.337480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.337828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.337834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.217 qpair failed and we were unable to recover it. 00:31:15.217 [2024-05-15 19:46:41.338200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.338490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.338497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.217 qpair failed and we were unable to recover it. 
00:31:15.217 [2024-05-15 19:46:41.338685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.339020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.339027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.217 qpair failed and we were unable to recover it. 00:31:15.217 [2024-05-15 19:46:41.339278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.339535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.339542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.217 qpair failed and we were unable to recover it. 00:31:15.217 [2024-05-15 19:46:41.339809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.340175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.340182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.217 qpair failed and we were unable to recover it. 00:31:15.217 [2024-05-15 19:46:41.340346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.340711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.340717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.217 qpair failed and we were unable to recover it. 00:31:15.217 [2024-05-15 19:46:41.341086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.341461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.341468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.217 qpair failed and we were unable to recover it. 00:31:15.217 [2024-05-15 19:46:41.341827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.342007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.342014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.217 qpair failed and we were unable to recover it. 00:31:15.217 [2024-05-15 19:46:41.342445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.342730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.342736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.217 qpair failed and we were unable to recover it. 
00:31:15.217 [2024-05-15 19:46:41.343096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.343475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.343481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.217 qpair failed and we were unable to recover it. 00:31:15.217 [2024-05-15 19:46:41.343783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.344145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.344151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.217 qpair failed and we were unable to recover it. 00:31:15.217 [2024-05-15 19:46:41.344420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.344671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.344678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.217 qpair failed and we were unable to recover it. 00:31:15.217 [2024-05-15 19:46:41.345053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.345540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.345547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.217 qpair failed and we were unable to recover it. 00:31:15.217 [2024-05-15 19:46:41.345904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.346269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.346275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.217 qpair failed and we were unable to recover it. 00:31:15.217 [2024-05-15 19:46:41.346522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.346951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.346958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.217 qpair failed and we were unable to recover it. 00:31:15.217 [2024-05-15 19:46:41.347280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.347472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.347479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.217 qpair failed and we were unable to recover it. 
00:31:15.217 [2024-05-15 19:46:41.347799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.348152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.348159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.217 qpair failed and we were unable to recover it. 00:31:15.217 [2024-05-15 19:46:41.348513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.348873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.348879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.217 qpair failed and we were unable to recover it. 00:31:15.217 [2024-05-15 19:46:41.349256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.349600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.349608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.217 qpair failed and we were unable to recover it. 00:31:15.217 [2024-05-15 19:46:41.349816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.350176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.350183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.217 qpair failed and we were unable to recover it. 00:31:15.217 [2024-05-15 19:46:41.350600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.350970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.350977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.217 qpair failed and we were unable to recover it. 00:31:15.217 [2024-05-15 19:46:41.351233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.217 [2024-05-15 19:46:41.351596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.351603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.218 qpair failed and we were unable to recover it. 00:31:15.218 [2024-05-15 19:46:41.351739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.352094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.352101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.218 qpair failed and we were unable to recover it. 
00:31:15.218 [2024-05-15 19:46:41.352472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.352546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.352553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.218 qpair failed and we were unable to recover it. 00:31:15.218 [2024-05-15 19:46:41.352925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.353293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.353299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.218 qpair failed and we were unable to recover it. 00:31:15.218 [2024-05-15 19:46:41.353781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.354016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.354022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.218 qpair failed and we were unable to recover it. 00:31:15.218 [2024-05-15 19:46:41.354426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.354742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.354749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.218 qpair failed and we were unable to recover it. 00:31:15.218 [2024-05-15 19:46:41.355147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.355504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.355511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.218 qpair failed and we were unable to recover it. 00:31:15.218 [2024-05-15 19:46:41.355868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.356196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.356202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.218 qpair failed and we were unable to recover it. 00:31:15.218 [2024-05-15 19:46:41.356587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.356830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.356836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.218 qpair failed and we were unable to recover it. 
00:31:15.218 [2024-05-15 19:46:41.357201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.357541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.357547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.218 qpair failed and we were unable to recover it. 00:31:15.218 [2024-05-15 19:46:41.357912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.358307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.358318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.218 qpair failed and we were unable to recover it. 00:31:15.218 [2024-05-15 19:46:41.358674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.359004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.359010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.218 qpair failed and we were unable to recover it. 00:31:15.218 [2024-05-15 19:46:41.359400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.359734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.359740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.218 qpair failed and we were unable to recover it. 00:31:15.218 [2024-05-15 19:46:41.360127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.360524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.360530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.218 qpair failed and we were unable to recover it. 00:31:15.218 [2024-05-15 19:46:41.360715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.361129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.361136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.218 qpair failed and we were unable to recover it. 00:31:15.218 [2024-05-15 19:46:41.361497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.361860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.361866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.218 qpair failed and we were unable to recover it. 
00:31:15.218 [2024-05-15 19:46:41.362207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.362586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.362593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.218 qpair failed and we were unable to recover it. 00:31:15.218 [2024-05-15 19:46:41.362940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.363299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.363305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.218 qpair failed and we were unable to recover it. 00:31:15.218 [2024-05-15 19:46:41.363586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.363864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.363871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.218 qpair failed and we were unable to recover it. 00:31:15.218 [2024-05-15 19:46:41.364247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.364608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.364614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.218 qpair failed and we were unable to recover it. 00:31:15.218 [2024-05-15 19:46:41.364963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.365339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.365346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.218 qpair failed and we were unable to recover it. 00:31:15.218 [2024-05-15 19:46:41.365628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.366014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.366020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.218 qpair failed and we were unable to recover it. 00:31:15.218 [2024-05-15 19:46:41.366392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.366810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.366816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.218 qpair failed and we were unable to recover it. 
00:31:15.218 [2024-05-15 19:46:41.367206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.367629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.367636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.218 qpair failed and we were unable to recover it. 00:31:15.218 [2024-05-15 19:46:41.367990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.218 [2024-05-15 19:46:41.368340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.219 [2024-05-15 19:46:41.368347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.219 qpair failed and we were unable to recover it. 00:31:15.219 [2024-05-15 19:46:41.368743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.219 [2024-05-15 19:46:41.369054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.219 [2024-05-15 19:46:41.369061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.219 qpair failed and we were unable to recover it. 00:31:15.219 [2024-05-15 19:46:41.369347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.219 [2024-05-15 19:46:41.369738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.219 [2024-05-15 19:46:41.369746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.219 qpair failed and we were unable to recover it. 00:31:15.219 [2024-05-15 19:46:41.369819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.219 [2024-05-15 19:46:41.370160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.219 [2024-05-15 19:46:41.370166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.219 qpair failed and we were unable to recover it. 00:31:15.219 [2024-05-15 19:46:41.370434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.219 [2024-05-15 19:46:41.370824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.219 [2024-05-15 19:46:41.370831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.219 qpair failed and we were unable to recover it. 00:31:15.219 [2024-05-15 19:46:41.371206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.219 [2024-05-15 19:46:41.371464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.219 [2024-05-15 19:46:41.371471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.219 qpair failed and we were unable to recover it. 
00:31:15.219 [2024-05-15 19:46:41.371805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.219 [2024-05-15 19:46:41.372150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.219 [2024-05-15 19:46:41.372157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.219 qpair failed and we were unable to recover it. 00:31:15.219 [2024-05-15 19:46:41.372568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.219 [2024-05-15 19:46:41.372866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.219 [2024-05-15 19:46:41.372872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.219 qpair failed and we were unable to recover it. 00:31:15.219 [2024-05-15 19:46:41.373067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.219 [2024-05-15 19:46:41.373449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.219 [2024-05-15 19:46:41.373456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.219 qpair failed and we were unable to recover it. 00:31:15.219 [2024-05-15 19:46:41.373719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.219 [2024-05-15 19:46:41.374074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.219 [2024-05-15 19:46:41.374080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.219 qpair failed and we were unable to recover it. 00:31:15.219 [2024-05-15 19:46:41.374424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.219 [2024-05-15 19:46:41.374852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.219 [2024-05-15 19:46:41.374858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.219 qpair failed and we were unable to recover it. 00:31:15.219 [2024-05-15 19:46:41.375188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.219 [2024-05-15 19:46:41.375513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.219 [2024-05-15 19:46:41.375520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.219 qpair failed and we were unable to recover it. 00:31:15.219 [2024-05-15 19:46:41.375877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.219 [2024-05-15 19:46:41.376149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.219 [2024-05-15 19:46:41.376155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.219 qpair failed and we were unable to recover it. 
00:31:15.219 [2024-05-15 19:46:41.376551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.219 [2024-05-15 19:46:41.376959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.219 [2024-05-15 19:46:41.376966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.219 qpair failed and we were unable to recover it. 00:31:15.219 [2024-05-15 19:46:41.377245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.219 [2024-05-15 19:46:41.377606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.219 [2024-05-15 19:46:41.377614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.219 qpair failed and we were unable to recover it. 00:31:15.219 [2024-05-15 19:46:41.377992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.219 [2024-05-15 19:46:41.378236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.219 [2024-05-15 19:46:41.378242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.219 qpair failed and we were unable to recover it. 00:31:15.219 [2024-05-15 19:46:41.378713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.219 [2024-05-15 19:46:41.379059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.219 [2024-05-15 19:46:41.379065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.219 qpair failed and we were unable to recover it. 00:31:15.219 [2024-05-15 19:46:41.379339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.219 [2024-05-15 19:46:41.379684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.219 [2024-05-15 19:46:41.379691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.219 qpair failed and we were unable to recover it. 00:31:15.219 [2024-05-15 19:46:41.380033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.219 [2024-05-15 19:46:41.380400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.219 [2024-05-15 19:46:41.380407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.219 qpair failed and we were unable to recover it. 00:31:15.489 [2024-05-15 19:46:41.380787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.489 [2024-05-15 19:46:41.381153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.489 [2024-05-15 19:46:41.381159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.489 qpair failed and we were unable to recover it. 
00:31:15.489 [2024-05-15 19:46:41.381523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.489 [2024-05-15 19:46:41.381694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.489 [2024-05-15 19:46:41.381700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.489 qpair failed and we were unable to recover it. 00:31:15.489 [2024-05-15 19:46:41.382077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.489 [2024-05-15 19:46:41.382371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.489 [2024-05-15 19:46:41.382378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.489 qpair failed and we were unable to recover it. 00:31:15.489 [2024-05-15 19:46:41.382660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.489 [2024-05-15 19:46:41.383007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.489 [2024-05-15 19:46:41.383014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.489 qpair failed and we were unable to recover it. 00:31:15.489 [2024-05-15 19:46:41.383283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.489 [2024-05-15 19:46:41.383723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.489 [2024-05-15 19:46:41.383730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.489 qpair failed and we were unable to recover it. 00:31:15.489 [2024-05-15 19:46:41.384113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.489 [2024-05-15 19:46:41.384382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.489 [2024-05-15 19:46:41.384391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.489 qpair failed and we were unable to recover it. 00:31:15.489 [2024-05-15 19:46:41.384621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.489 [2024-05-15 19:46:41.384930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.489 [2024-05-15 19:46:41.384937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.489 qpair failed and we were unable to recover it. 00:31:15.489 [2024-05-15 19:46:41.385301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.489 [2024-05-15 19:46:41.385670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.489 [2024-05-15 19:46:41.385676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.489 qpair failed and we were unable to recover it. 
00:31:15.489 [2024-05-15 19:46:41.386024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.489 [2024-05-15 19:46:41.386361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.489 [2024-05-15 19:46:41.386368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.489 qpair failed and we were unable to recover it. 00:31:15.489 [2024-05-15 19:46:41.386526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.489 [2024-05-15 19:46:41.386887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.489 [2024-05-15 19:46:41.386893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.489 qpair failed and we were unable to recover it. 00:31:15.489 [2024-05-15 19:46:41.387248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.489 [2024-05-15 19:46:41.387326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.489 [2024-05-15 19:46:41.387333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.489 qpair failed and we were unable to recover it. 00:31:15.489 [2024-05-15 19:46:41.387499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.489 [2024-05-15 19:46:41.387858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.489 [2024-05-15 19:46:41.387865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.489 qpair failed and we were unable to recover it. 00:31:15.489 [2024-05-15 19:46:41.388217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.489 [2024-05-15 19:46:41.388484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.489 [2024-05-15 19:46:41.388491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.489 qpair failed and we were unable to recover it. 00:31:15.490 [2024-05-15 19:46:41.388735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.389104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.389111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.490 qpair failed and we were unable to recover it. 00:31:15.490 [2024-05-15 19:46:41.389496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.389869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.389875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.490 qpair failed and we were unable to recover it. 
00:31:15.490 [2024-05-15 19:46:41.390108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.390335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.390344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.490 qpair failed and we were unable to recover it. 00:31:15.490 [2024-05-15 19:46:41.390688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.391032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.391038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.490 qpair failed and we were unable to recover it. 00:31:15.490 [2024-05-15 19:46:41.391385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.391653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.391660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.490 qpair failed and we were unable to recover it. 00:31:15.490 [2024-05-15 19:46:41.392023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.392418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.392425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.490 qpair failed and we were unable to recover it. 00:31:15.490 [2024-05-15 19:46:41.392795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.393162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.393169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.490 qpair failed and we were unable to recover it. 00:31:15.490 [2024-05-15 19:46:41.393429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.393812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.393818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.490 qpair failed and we were unable to recover it. 00:31:15.490 [2024-05-15 19:46:41.394071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.394434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.394440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.490 qpair failed and we were unable to recover it. 
00:31:15.490 [2024-05-15 19:46:41.394833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.395228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.395234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.490 qpair failed and we were unable to recover it. 00:31:15.490 [2024-05-15 19:46:41.395602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.395997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.396003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.490 qpair failed and we were unable to recover it. 00:31:15.490 [2024-05-15 19:46:41.396377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.396750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.396756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.490 qpair failed and we were unable to recover it. 00:31:15.490 [2024-05-15 19:46:41.397182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.397606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.397614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.490 qpair failed and we were unable to recover it. 00:31:15.490 [2024-05-15 19:46:41.398004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.398365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.398372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.490 qpair failed and we were unable to recover it. 00:31:15.490 [2024-05-15 19:46:41.398708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.399099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.399105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.490 qpair failed and we were unable to recover it. 00:31:15.490 [2024-05-15 19:46:41.399409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.399777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.399783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.490 qpair failed and we were unable to recover it. 
00:31:15.490 [2024-05-15 19:46:41.400156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.400503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.400510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.490 qpair failed and we were unable to recover it. 00:31:15.490 [2024-05-15 19:46:41.400860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.401216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.401222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.490 qpair failed and we were unable to recover it. 00:31:15.490 [2024-05-15 19:46:41.401411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.401852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.401858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.490 qpair failed and we were unable to recover it. 00:31:15.490 [2024-05-15 19:46:41.402204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.402454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.402461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.490 qpair failed and we were unable to recover it. 00:31:15.490 [2024-05-15 19:46:41.402905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.403212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.403218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.490 qpair failed and we were unable to recover it. 00:31:15.490 [2024-05-15 19:46:41.403588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.403966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.403972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.490 qpair failed and we were unable to recover it. 00:31:15.490 [2024-05-15 19:46:41.404324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.404689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.404696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.490 qpair failed and we were unable to recover it. 
00:31:15.490 [2024-05-15 19:46:41.404750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.405164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.405171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.490 qpair failed and we were unable to recover it. 00:31:15.490 [2024-05-15 19:46:41.405582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.405940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.405946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.490 qpair failed and we were unable to recover it. 00:31:15.490 [2024-05-15 19:46:41.406259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.406628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.406635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.490 qpair failed and we were unable to recover it. 00:31:15.490 [2024-05-15 19:46:41.406900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.407350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.407357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.490 qpair failed and we were unable to recover it. 00:31:15.490 [2024-05-15 19:46:41.407725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.408060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.408067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.490 qpair failed and we were unable to recover it. 00:31:15.490 [2024-05-15 19:46:41.408391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.408726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.408732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.490 qpair failed and we were unable to recover it. 00:31:15.490 [2024-05-15 19:46:41.409103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.490 [2024-05-15 19:46:41.409458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.409465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.491 qpair failed and we were unable to recover it. 
00:31:15.491 [2024-05-15 19:46:41.409887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.410246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.410252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.491 qpair failed and we were unable to recover it. 00:31:15.491 [2024-05-15 19:46:41.410609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.410841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.410847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.491 qpair failed and we were unable to recover it. 00:31:15.491 [2024-05-15 19:46:41.411204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.411448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.411455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.491 qpair failed and we were unable to recover it. 00:31:15.491 [2024-05-15 19:46:41.411846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.412254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.412261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.491 qpair failed and we were unable to recover it. 00:31:15.491 [2024-05-15 19:46:41.412468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.412720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.412726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.491 qpair failed and we were unable to recover it. 00:31:15.491 [2024-05-15 19:46:41.413095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.413535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.413542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.491 qpair failed and we were unable to recover it. 00:31:15.491 [2024-05-15 19:46:41.413886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.414229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.414244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.491 qpair failed and we were unable to recover it. 
00:31:15.491 [2024-05-15 19:46:41.414497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.414706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.414712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.491 qpair failed and we were unable to recover it. 00:31:15.491 [2024-05-15 19:46:41.415022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.415410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.415417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.491 qpair failed and we were unable to recover it. 00:31:15.491 [2024-05-15 19:46:41.415792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.416176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.416182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.491 qpair failed and we were unable to recover it. 00:31:15.491 [2024-05-15 19:46:41.416457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.416773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.416779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.491 qpair failed and we were unable to recover it. 00:31:15.491 [2024-05-15 19:46:41.416944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.417238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.417245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.491 qpair failed and we were unable to recover it. 00:31:15.491 [2024-05-15 19:46:41.417634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.417991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.417997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.491 qpair failed and we were unable to recover it. 00:31:15.491 [2024-05-15 19:46:41.418391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.418724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.418730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.491 qpair failed and we were unable to recover it. 
00:31:15.491 [2024-05-15 19:46:41.419087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.419320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.419328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.491 qpair failed and we were unable to recover it. 00:31:15.491 [2024-05-15 19:46:41.419730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.420133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.420140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.491 qpair failed and we were unable to recover it. 00:31:15.491 [2024-05-15 19:46:41.420515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.420861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.420868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.491 qpair failed and we were unable to recover it. 00:31:15.491 [2024-05-15 19:46:41.421136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.421526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.421533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.491 qpair failed and we were unable to recover it. 00:31:15.491 [2024-05-15 19:46:41.421856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.422096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.422103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.491 qpair failed and we were unable to recover it. 00:31:15.491 [2024-05-15 19:46:41.422403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.422786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.422792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.491 qpair failed and we were unable to recover it. 00:31:15.491 [2024-05-15 19:46:41.423173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.423519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.423526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.491 qpair failed and we were unable to recover it. 
00:31:15.491 [2024-05-15 19:46:41.423898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.424246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.424252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.491 qpair failed and we were unable to recover it. 00:31:15.491 [2024-05-15 19:46:41.424604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.424982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.424988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.491 qpair failed and we were unable to recover it. 00:31:15.491 [2024-05-15 19:46:41.425383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.425747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.425753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.491 qpair failed and we were unable to recover it. 00:31:15.491 [2024-05-15 19:46:41.426124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.426410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.426416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.491 qpair failed and we were unable to recover it. 00:31:15.491 [2024-05-15 19:46:41.426644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.427020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.427026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.491 qpair failed and we were unable to recover it. 00:31:15.491 [2024-05-15 19:46:41.427374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.427747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.427753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.491 qpair failed and we were unable to recover it. 00:31:15.491 [2024-05-15 19:46:41.428107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.428407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.428414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.491 qpair failed and we were unable to recover it. 
00:31:15.491 [2024-05-15 19:46:41.428785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.428972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.491 [2024-05-15 19:46:41.428979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.491 qpair failed and we were unable to recover it. 00:31:15.492 [2024-05-15 19:46:41.429341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.492 [2024-05-15 19:46:41.429730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.492 [2024-05-15 19:46:41.429736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.492 qpair failed and we were unable to recover it. 00:31:15.492 [2024-05-15 19:46:41.430157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.492 [2024-05-15 19:46:41.430510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.492 [2024-05-15 19:46:41.430518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.492 qpair failed and we were unable to recover it. 00:31:15.492 [2024-05-15 19:46:41.430880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.492 [2024-05-15 19:46:41.431234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.492 [2024-05-15 19:46:41.431240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.492 qpair failed and we were unable to recover it. 00:31:15.492 [2024-05-15 19:46:41.431606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.492 [2024-05-15 19:46:41.431882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.492 [2024-05-15 19:46:41.431888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.492 qpair failed and we were unable to recover it. 00:31:15.492 [2024-05-15 19:46:41.432240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.492 [2024-05-15 19:46:41.432524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.492 [2024-05-15 19:46:41.432531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.492 qpair failed and we were unable to recover it. 00:31:15.492 [2024-05-15 19:46:41.432903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.492 [2024-05-15 19:46:41.433119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.492 [2024-05-15 19:46:41.433125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.492 qpair failed and we were unable to recover it. 
00:31:15.492 [2024-05-15 19:46:41.433389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.492 [2024-05-15 19:46:41.433839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.492 [2024-05-15 19:46:41.433845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.492 qpair failed and we were unable to recover it. 00:31:15.492 [2024-05-15 19:46:41.434192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.492 [2024-05-15 19:46:41.434502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.492 [2024-05-15 19:46:41.434508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.492 qpair failed and we were unable to recover it. 00:31:15.492 [2024-05-15 19:46:41.434765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.492 [2024-05-15 19:46:41.435145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.492 [2024-05-15 19:46:41.435152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.492 qpair failed and we were unable to recover it. 00:31:15.492 [2024-05-15 19:46:41.435349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.492 [2024-05-15 19:46:41.435737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.492 [2024-05-15 19:46:41.435743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.492 qpair failed and we were unable to recover it. 00:31:15.492 [2024-05-15 19:46:41.436014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.492 [2024-05-15 19:46:41.436414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.492 [2024-05-15 19:46:41.436420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.492 qpair failed and we were unable to recover it. 00:31:15.492 [2024-05-15 19:46:41.436616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.492 [2024-05-15 19:46:41.436995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.492 [2024-05-15 19:46:41.437001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.492 qpair failed and we were unable to recover it. 00:31:15.492 [2024-05-15 19:46:41.437356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.492 [2024-05-15 19:46:41.437669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.492 [2024-05-15 19:46:41.437675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.492 qpair failed and we were unable to recover it. 
00:31:15.492 [2024-05-15 19:46:41.438029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:15.492 [2024-05-15 19:46:41.438295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:15.492 [2024-05-15 19:46:41.438301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420
00:31:15.492 qpair failed and we were unable to recover it.
00:31:15.492 [2024-05-15 19:46:41.438715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:15.492 [2024-05-15 19:46:41.439064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:15.492 [2024-05-15 19:46:41.439070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420
00:31:15.492 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats continuously from 2024-05-15 19:46:41.438 through 19:46:41.545 ...]
00:31:15.497 [2024-05-15 19:46:41.545974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.497 [2024-05-15 19:46:41.546157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.497 [2024-05-15 19:46:41.546164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.497 qpair failed and we were unable to recover it. 00:31:15.497 [2024-05-15 19:46:41.546592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.497 [2024-05-15 19:46:41.546999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.497 [2024-05-15 19:46:41.547007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.497 qpair failed and we were unable to recover it. 00:31:15.497 [2024-05-15 19:46:41.547372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.497 [2024-05-15 19:46:41.547689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.497 [2024-05-15 19:46:41.547696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.497 qpair failed and we were unable to recover it. 00:31:15.497 [2024-05-15 19:46:41.548082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.497 [2024-05-15 19:46:41.548432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.497 [2024-05-15 19:46:41.548439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.497 qpair failed and we were unable to recover it. 00:31:15.497 [2024-05-15 19:46:41.548786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.497 [2024-05-15 19:46:41.549174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.549181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.498 qpair failed and we were unable to recover it. 00:31:15.498 [2024-05-15 19:46:41.549526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.549917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.549923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.498 qpair failed and we were unable to recover it. 00:31:15.498 [2024-05-15 19:46:41.550195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.550621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.550628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.498 qpair failed and we were unable to recover it. 
00:31:15.498 [2024-05-15 19:46:41.550978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.551344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.551352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.498 qpair failed and we were unable to recover it. 00:31:15.498 [2024-05-15 19:46:41.551615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.552012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.552018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.498 qpair failed and we were unable to recover it. 00:31:15.498 [2024-05-15 19:46:41.552286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.552656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.552662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.498 qpair failed and we were unable to recover it. 00:31:15.498 [2024-05-15 19:46:41.553006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.553528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.553555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.498 qpair failed and we were unable to recover it. 00:31:15.498 [2024-05-15 19:46:41.553915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.554280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.554288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.498 qpair failed and we were unable to recover it. 00:31:15.498 [2024-05-15 19:46:41.554660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.555010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.555017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.498 qpair failed and we were unable to recover it. 00:31:15.498 [2024-05-15 19:46:41.555386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.555623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.555631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.498 qpair failed and we were unable to recover it. 
00:31:15.498 [2024-05-15 19:46:41.555953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.556327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.556333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.498 qpair failed and we were unable to recover it. 00:31:15.498 [2024-05-15 19:46:41.556691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.557083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.557089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.498 qpair failed and we were unable to recover it. 00:31:15.498 [2024-05-15 19:46:41.557440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.557835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.557841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.498 qpair failed and we were unable to recover it. 00:31:15.498 [2024-05-15 19:46:41.558222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.558558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.558564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.498 qpair failed and we were unable to recover it. 00:31:15.498 [2024-05-15 19:46:41.558887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.559152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.559158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.498 qpair failed and we were unable to recover it. 00:31:15.498 [2024-05-15 19:46:41.559556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.559910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.559916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.498 qpair failed and we were unable to recover it. 00:31:15.498 [2024-05-15 19:46:41.560302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.560654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.560661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.498 qpair failed and we were unable to recover it. 
00:31:15.498 [2024-05-15 19:46:41.560852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.561245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.561253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.498 qpair failed and we were unable to recover it. 00:31:15.498 [2024-05-15 19:46:41.561566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.561940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.561946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.498 qpair failed and we were unable to recover it. 00:31:15.498 [2024-05-15 19:46:41.562274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.562628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.562635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.498 qpair failed and we were unable to recover it. 00:31:15.498 [2024-05-15 19:46:41.562972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.563377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.563383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.498 qpair failed and we were unable to recover it. 00:31:15.498 [2024-05-15 19:46:41.563719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.564101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.564107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.498 qpair failed and we were unable to recover it. 00:31:15.498 [2024-05-15 19:46:41.564505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.564863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.564869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.498 qpair failed and we were unable to recover it. 00:31:15.498 [2024-05-15 19:46:41.565232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.565600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.565607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.498 qpair failed and we were unable to recover it. 
00:31:15.498 [2024-05-15 19:46:41.565958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.566126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.566134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.498 qpair failed and we were unable to recover it. 00:31:15.498 [2024-05-15 19:46:41.566459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.566813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.498 [2024-05-15 19:46:41.566820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.499 qpair failed and we were unable to recover it. 00:31:15.499 [2024-05-15 19:46:41.567165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.567525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.567532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.499 qpair failed and we were unable to recover it. 00:31:15.499 [2024-05-15 19:46:41.567922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.568301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.568307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.499 qpair failed and we were unable to recover it. 00:31:15.499 [2024-05-15 19:46:41.568654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.568970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.568977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.499 qpair failed and we were unable to recover it. 00:31:15.499 [2024-05-15 19:46:41.569349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.569701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.569707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.499 qpair failed and we were unable to recover it. 00:31:15.499 [2024-05-15 19:46:41.570039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.570428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.570435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.499 qpair failed and we were unable to recover it. 
00:31:15.499 [2024-05-15 19:46:41.570784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.571020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.571027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.499 qpair failed and we were unable to recover it. 00:31:15.499 [2024-05-15 19:46:41.571387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.571778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.571784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.499 qpair failed and we were unable to recover it. 00:31:15.499 [2024-05-15 19:46:41.572031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.572388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.572394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.499 qpair failed and we were unable to recover it. 00:31:15.499 [2024-05-15 19:46:41.572787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.573148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.573155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.499 qpair failed and we were unable to recover it. 00:31:15.499 [2024-05-15 19:46:41.573523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.573867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.573874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.499 qpair failed and we were unable to recover it. 00:31:15.499 [2024-05-15 19:46:41.574022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.574403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.574410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.499 qpair failed and we were unable to recover it. 00:31:15.499 [2024-05-15 19:46:41.574759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.575143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.575150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.499 qpair failed and we were unable to recover it. 
00:31:15.499 [2024-05-15 19:46:41.575517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.575873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.575879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.499 qpair failed and we were unable to recover it. 00:31:15.499 [2024-05-15 19:46:41.576226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.576635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.576642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.499 qpair failed and we were unable to recover it. 00:31:15.499 [2024-05-15 19:46:41.576979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.577342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.577349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.499 qpair failed and we were unable to recover it. 00:31:15.499 [2024-05-15 19:46:41.577727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.578084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.578090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.499 qpair failed and we were unable to recover it. 00:31:15.499 [2024-05-15 19:46:41.578528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.578884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.578890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.499 qpair failed and we were unable to recover it. 00:31:15.499 [2024-05-15 19:46:41.579241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.579566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.579573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.499 qpair failed and we were unable to recover it. 00:31:15.499 [2024-05-15 19:46:41.579966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.580325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.580332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.499 qpair failed and we were unable to recover it. 
00:31:15.499 [2024-05-15 19:46:41.580684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.580967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.580973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.499 qpair failed and we were unable to recover it. 00:31:15.499 [2024-05-15 19:46:41.581299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.581694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.581701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.499 qpair failed and we were unable to recover it. 00:31:15.499 [2024-05-15 19:46:41.582054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.582409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.582416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.499 qpair failed and we were unable to recover it. 00:31:15.499 [2024-05-15 19:46:41.582793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.583175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.583182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.499 qpair failed and we were unable to recover it. 00:31:15.499 [2024-05-15 19:46:41.583532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.583920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.583928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.499 qpair failed and we were unable to recover it. 00:31:15.499 [2024-05-15 19:46:41.584292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.584496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.584504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.499 qpair failed and we were unable to recover it. 00:31:15.499 [2024-05-15 19:46:41.584877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.585274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.585281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.499 qpair failed and we were unable to recover it. 
00:31:15.499 [2024-05-15 19:46:41.585658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.585942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.585949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.499 qpair failed and we were unable to recover it. 00:31:15.499 [2024-05-15 19:46:41.586325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.586679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.586686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.499 qpair failed and we were unable to recover it. 00:31:15.499 [2024-05-15 19:46:41.587031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.587387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.499 [2024-05-15 19:46:41.587394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.499 qpair failed and we were unable to recover it. 00:31:15.499 [2024-05-15 19:46:41.587765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.588135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.588142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.500 qpair failed and we were unable to recover it. 00:31:15.500 [2024-05-15 19:46:41.588490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.588925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.588931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.500 qpair failed and we were unable to recover it. 00:31:15.500 [2024-05-15 19:46:41.589297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.589548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.589556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.500 qpair failed and we were unable to recover it. 00:31:15.500 [2024-05-15 19:46:41.589922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.590273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.590280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.500 qpair failed and we were unable to recover it. 
00:31:15.500 [2024-05-15 19:46:41.590669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.591020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.591026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.500 qpair failed and we were unable to recover it. 00:31:15.500 [2024-05-15 19:46:41.591418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.591783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.591789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.500 qpair failed and we were unable to recover it. 00:31:15.500 [2024-05-15 19:46:41.591977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.592311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.592322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.500 qpair failed and we were unable to recover it. 00:31:15.500 [2024-05-15 19:46:41.592702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.592921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.592928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.500 qpair failed and we were unable to recover it. 00:31:15.500 [2024-05-15 19:46:41.593393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.593734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.593741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.500 qpair failed and we were unable to recover it. 00:31:15.500 [2024-05-15 19:46:41.594118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.594463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.594471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.500 qpair failed and we were unable to recover it. 00:31:15.500 [2024-05-15 19:46:41.594815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.595186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.595192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.500 qpair failed and we were unable to recover it. 
00:31:15.500 [2024-05-15 19:46:41.595540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.595913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.595920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.500 qpair failed and we were unable to recover it. 00:31:15.500 [2024-05-15 19:46:41.596265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.596625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.596632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.500 qpair failed and we were unable to recover it. 00:31:15.500 [2024-05-15 19:46:41.597011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.597398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.597405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.500 qpair failed and we were unable to recover it. 00:31:15.500 [2024-05-15 19:46:41.597706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.598091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.598097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.500 qpair failed and we were unable to recover it. 00:31:15.500 [2024-05-15 19:46:41.598285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.598608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.598615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.500 qpair failed and we were unable to recover it. 00:31:15.500 [2024-05-15 19:46:41.598853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.599227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.599233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.500 qpair failed and we were unable to recover it. 00:31:15.500 [2024-05-15 19:46:41.599591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.599945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.599951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.500 qpair failed and we were unable to recover it. 
00:31:15.500 [2024-05-15 19:46:41.600324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.600626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.600633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.500 qpair failed and we were unable to recover it. 00:31:15.500 [2024-05-15 19:46:41.601034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.601355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.601363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.500 qpair failed and we were unable to recover it. 00:31:15.500 [2024-05-15 19:46:41.601739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.602074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.602080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.500 qpair failed and we were unable to recover it. 00:31:15.500 [2024-05-15 19:46:41.602498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.602814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.602821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.500 qpair failed and we were unable to recover it. 00:31:15.500 [2024-05-15 19:46:41.603179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.603446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.603454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.500 qpair failed and we were unable to recover it. 00:31:15.500 [2024-05-15 19:46:41.603648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.603984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.603992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.500 qpair failed and we were unable to recover it. 00:31:15.500 [2024-05-15 19:46:41.604377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.604707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.604714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.500 qpair failed and we were unable to recover it. 
00:31:15.500 [2024-05-15 19:46:41.605108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.605474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.605481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.500 qpair failed and we were unable to recover it. 00:31:15.500 [2024-05-15 19:46:41.605904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.606254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.606261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.500 qpair failed and we were unable to recover it. 00:31:15.500 [2024-05-15 19:46:41.606612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.606945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.606957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.500 qpair failed and we were unable to recover it. 00:31:15.500 [2024-05-15 19:46:41.607332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.607673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.607679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.500 qpair failed and we were unable to recover it. 00:31:15.500 [2024-05-15 19:46:41.608076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.608468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.500 [2024-05-15 19:46:41.608476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.500 qpair failed and we were unable to recover it. 00:31:15.501 [2024-05-15 19:46:41.608856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.609212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.609218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.501 qpair failed and we were unable to recover it. 00:31:15.501 [2024-05-15 19:46:41.609557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.609611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.609618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.501 qpair failed and we were unable to recover it. 
00:31:15.501 [2024-05-15 19:46:41.609981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.610366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.610373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.501 qpair failed and we were unable to recover it. 00:31:15.501 [2024-05-15 19:46:41.610726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.611079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.611086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.501 qpair failed and we were unable to recover it. 00:31:15.501 [2024-05-15 19:46:41.611451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.611631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.611638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.501 qpair failed and we were unable to recover it. 00:31:15.501 [2024-05-15 19:46:41.612014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.612412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.612419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.501 qpair failed and we were unable to recover it. 00:31:15.501 [2024-05-15 19:46:41.612686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.613146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.613153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.501 qpair failed and we were unable to recover it. 00:31:15.501 [2024-05-15 19:46:41.613529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.613882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.613888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.501 qpair failed and we were unable to recover it. 00:31:15.501 [2024-05-15 19:46:41.614260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.614612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.614618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.501 qpair failed and we were unable to recover it. 
00:31:15.501 [2024-05-15 19:46:41.614876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.615254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.615262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.501 qpair failed and we were unable to recover it. 00:31:15.501 [2024-05-15 19:46:41.615621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.616021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.616027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.501 qpair failed and we were unable to recover it. 00:31:15.501 [2024-05-15 19:46:41.616299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.616687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.616694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.501 qpair failed and we were unable to recover it. 00:31:15.501 [2024-05-15 19:46:41.617081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.617435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.617442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.501 qpair failed and we were unable to recover it. 00:31:15.501 [2024-05-15 19:46:41.617627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.617993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.618000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.501 qpair failed and we were unable to recover it. 00:31:15.501 [2024-05-15 19:46:41.618348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.618712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.618718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.501 qpair failed and we were unable to recover it. 00:31:15.501 [2024-05-15 19:46:41.619066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.619323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.619329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.501 qpair failed and we were unable to recover it. 
00:31:15.501 [2024-05-15 19:46:41.619582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.619978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.619986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.501 qpair failed and we were unable to recover it. 00:31:15.501 [2024-05-15 19:46:41.620437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.620782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.620789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.501 qpair failed and we were unable to recover it. 00:31:15.501 [2024-05-15 19:46:41.621184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.621497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.621505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.501 qpair failed and we were unable to recover it. 00:31:15.501 [2024-05-15 19:46:41.621779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.622165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.622171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.501 qpair failed and we were unable to recover it. 00:31:15.501 [2024-05-15 19:46:41.622560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.622799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.622806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.501 qpair failed and we were unable to recover it. 00:31:15.501 [2024-05-15 19:46:41.623160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.623517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.623524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.501 qpair failed and we were unable to recover it. 00:31:15.501 [2024-05-15 19:46:41.623717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.624072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.624079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.501 qpair failed and we were unable to recover it. 
00:31:15.501 [2024-05-15 19:46:41.624499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.624850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.624857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.501 qpair failed and we were unable to recover it. 00:31:15.501 [2024-05-15 19:46:41.625255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.625613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.625620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.501 qpair failed and we were unable to recover it. 00:31:15.501 [2024-05-15 19:46:41.625986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.626384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.626390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.501 qpair failed and we were unable to recover it. 00:31:15.501 [2024-05-15 19:46:41.626758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.627125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.627132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.501 qpair failed and we were unable to recover it. 00:31:15.501 [2024-05-15 19:46:41.627484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.627844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.627851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.501 qpair failed and we were unable to recover it. 00:31:15.501 [2024-05-15 19:46:41.628217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.628587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.628593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.501 qpair failed and we were unable to recover it. 00:31:15.501 [2024-05-15 19:46:41.629047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.629397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.501 [2024-05-15 19:46:41.629404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.502 qpair failed and we were unable to recover it. 
00:31:15.502 [2024-05-15 19:46:41.629767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.630067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.630074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.502 qpair failed and we were unable to recover it. 00:31:15.502 [2024-05-15 19:46:41.630445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.630801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.630807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.502 qpair failed and we were unable to recover it. 00:31:15.502 [2024-05-15 19:46:41.631180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.631526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.631532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.502 qpair failed and we were unable to recover it. 00:31:15.502 [2024-05-15 19:46:41.631883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.632288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.632295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.502 qpair failed and we were unable to recover it. 00:31:15.502 [2024-05-15 19:46:41.632675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.633074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.633081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.502 qpair failed and we were unable to recover it. 00:31:15.502 [2024-05-15 19:46:41.633542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.633930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.633939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.502 qpair failed and we were unable to recover it. 00:31:15.502 [2024-05-15 19:46:41.634287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.634650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.634657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.502 qpair failed and we were unable to recover it. 
00:31:15.502 [2024-05-15 19:46:41.635029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.635368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.635375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.502 qpair failed and we were unable to recover it. 00:31:15.502 [2024-05-15 19:46:41.635737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.636082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.636088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.502 qpair failed and we were unable to recover it. 00:31:15.502 [2024-05-15 19:46:41.636439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.636778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.636785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.502 qpair failed and we were unable to recover it. 00:31:15.502 [2024-05-15 19:46:41.637136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.637493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.637500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.502 qpair failed and we were unable to recover it. 00:31:15.502 [2024-05-15 19:46:41.637849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.638174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.638180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.502 qpair failed and we were unable to recover it. 00:31:15.502 [2024-05-15 19:46:41.638541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.638892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.638898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.502 qpair failed and we were unable to recover it. 00:31:15.502 [2024-05-15 19:46:41.639281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.639514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.639522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.502 qpair failed and we were unable to recover it. 
00:31:15.502 [2024-05-15 19:46:41.639870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.640185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.640193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.502 qpair failed and we were unable to recover it. 00:31:15.502 [2024-05-15 19:46:41.640579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.640929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.640935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.502 qpair failed and we were unable to recover it. 00:31:15.502 [2024-05-15 19:46:41.641288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.641646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.641653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.502 qpair failed and we were unable to recover it. 00:31:15.502 [2024-05-15 19:46:41.642003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.642370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.642378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.502 qpair failed and we were unable to recover it. 00:31:15.502 [2024-05-15 19:46:41.642764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.643163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.643170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.502 qpair failed and we were unable to recover it. 00:31:15.502 [2024-05-15 19:46:41.643455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.643828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.643835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.502 qpair failed and we were unable to recover it. 00:31:15.502 [2024-05-15 19:46:41.644218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.644629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.644636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.502 qpair failed and we were unable to recover it. 
00:31:15.502 [2024-05-15 19:46:41.645042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.645410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.645418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.502 qpair failed and we were unable to recover it. 00:31:15.502 [2024-05-15 19:46:41.645803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.646188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.646195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.502 qpair failed and we were unable to recover it. 00:31:15.502 [2024-05-15 19:46:41.646532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.646668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.646676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.502 qpair failed and we were unable to recover it. 00:31:15.502 [2024-05-15 19:46:41.647058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.502 [2024-05-15 19:46:41.647411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.647419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.503 qpair failed and we were unable to recover it. 00:31:15.503 [2024-05-15 19:46:41.647800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.648074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.648082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.503 qpair failed and we were unable to recover it. 00:31:15.503 [2024-05-15 19:46:41.648441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.648732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.648739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.503 qpair failed and we were unable to recover it. 00:31:15.503 [2024-05-15 19:46:41.649125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.649369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.649377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.503 qpair failed and we were unable to recover it. 
00:31:15.503 [2024-05-15 19:46:41.649768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.650028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.650035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.503 qpair failed and we were unable to recover it. 00:31:15.503 [2024-05-15 19:46:41.650411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.650680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.650688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.503 qpair failed and we were unable to recover it. 00:31:15.503 [2024-05-15 19:46:41.651063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.651446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.651453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.503 qpair failed and we were unable to recover it. 00:31:15.503 [2024-05-15 19:46:41.651841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.652225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.652232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.503 qpair failed and we were unable to recover it. 00:31:15.503 [2024-05-15 19:46:41.652704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.653004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.653011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.503 qpair failed and we were unable to recover it. 00:31:15.503 [2024-05-15 19:46:41.653368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.653817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.653823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.503 qpair failed and we were unable to recover it. 00:31:15.503 [2024-05-15 19:46:41.653909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.654289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.654295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.503 qpair failed and we were unable to recover it. 
00:31:15.503 [2024-05-15 19:46:41.654582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.654929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.654936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.503 qpair failed and we were unable to recover it. 00:31:15.503 [2024-05-15 19:46:41.655296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.655675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.655681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.503 qpair failed and we were unable to recover it. 00:31:15.503 [2024-05-15 19:46:41.656129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.656463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.656470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.503 qpair failed and we were unable to recover it. 00:31:15.503 [2024-05-15 19:46:41.656849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.657208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.657214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.503 qpair failed and we were unable to recover it. 00:31:15.503 [2024-05-15 19:46:41.657633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.657875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.657882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.503 qpair failed and we were unable to recover it. 00:31:15.503 [2024-05-15 19:46:41.658061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.658422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.658428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.503 qpair failed and we were unable to recover it. 00:31:15.503 [2024-05-15 19:46:41.658478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.658839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.658846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.503 qpair failed and we were unable to recover it. 
00:31:15.503 [2024-05-15 19:46:41.659121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.659488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.659495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.503 qpair failed and we were unable to recover it. 00:31:15.503 [2024-05-15 19:46:41.659863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.660225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.660231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.503 qpair failed and we were unable to recover it. 00:31:15.503 [2024-05-15 19:46:41.660403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.660821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.660827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.503 qpair failed and we were unable to recover it. 00:31:15.503 [2024-05-15 19:46:41.661254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.661539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.661545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.503 qpair failed and we were unable to recover it. 00:31:15.503 [2024-05-15 19:46:41.661904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.662257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.662263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.503 qpair failed and we were unable to recover it. 00:31:15.503 [2024-05-15 19:46:41.662528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.662866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.662873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.503 qpair failed and we were unable to recover it. 00:31:15.503 [2024-05-15 19:46:41.663233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.663605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.663611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.503 qpair failed and we were unable to recover it. 
00:31:15.503 [2024-05-15 19:46:41.663966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.664368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.664374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.503 qpair failed and we were unable to recover it. 00:31:15.503 [2024-05-15 19:46:41.664564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.664965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.503 [2024-05-15 19:46:41.664971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.503 qpair failed and we were unable to recover it. 00:31:15.503 [2024-05-15 19:46:41.665320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.773 [2024-05-15 19:46:41.665589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.773 [2024-05-15 19:46:41.665598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.773 qpair failed and we were unable to recover it. 00:31:15.773 [2024-05-15 19:46:41.665993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.773 [2024-05-15 19:46:41.666261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.773 [2024-05-15 19:46:41.666268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.773 qpair failed and we were unable to recover it. 00:31:15.773 [2024-05-15 19:46:41.666583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.773 [2024-05-15 19:46:41.666929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.773 [2024-05-15 19:46:41.666935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.773 qpair failed and we were unable to recover it. 00:31:15.773 [2024-05-15 19:46:41.667268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.773 [2024-05-15 19:46:41.667623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.773 [2024-05-15 19:46:41.667631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.773 qpair failed and we were unable to recover it. 00:31:15.773 [2024-05-15 19:46:41.667977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.773 [2024-05-15 19:46:41.668153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.773 [2024-05-15 19:46:41.668160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.773 qpair failed and we were unable to recover it. 
00:31:15.773 [2024-05-15 19:46:41.668537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.773 [2024-05-15 19:46:41.668898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.773 [2024-05-15 19:46:41.668904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.773 qpair failed and we were unable to recover it. 00:31:15.773 [2024-05-15 19:46:41.669257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.773 [2024-05-15 19:46:41.669602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.773 [2024-05-15 19:46:41.669608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.773 qpair failed and we were unable to recover it. 00:31:15.773 [2024-05-15 19:46:41.669782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.773 [2024-05-15 19:46:41.670169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.773 [2024-05-15 19:46:41.670176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.773 qpair failed and we were unable to recover it. 00:31:15.773 [2024-05-15 19:46:41.670565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.773 [2024-05-15 19:46:41.670936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.773 [2024-05-15 19:46:41.670943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.773 qpair failed and we were unable to recover it. 00:31:15.773 [2024-05-15 19:46:41.671294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.773 [2024-05-15 19:46:41.671734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.773 [2024-05-15 19:46:41.671741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.773 qpair failed and we were unable to recover it. 00:31:15.773 [2024-05-15 19:46:41.672101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.773 [2024-05-15 19:46:41.672626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.773 [2024-05-15 19:46:41.672654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.773 qpair failed and we were unable to recover it. 00:31:15.773 [2024-05-15 19:46:41.673029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.773 [2024-05-15 19:46:41.673593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.773 [2024-05-15 19:46:41.673620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.773 qpair failed and we were unable to recover it. 
00:31:15.773 [2024-05-15 19:46:41.673976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.773 [2024-05-15 19:46:41.674441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.773 [2024-05-15 19:46:41.674449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.773 qpair failed and we were unable to recover it. 00:31:15.773 [2024-05-15 19:46:41.674853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.773 [2024-05-15 19:46:41.675190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.773 [2024-05-15 19:46:41.675197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.773 qpair failed and we were unable to recover it. 00:31:15.773 [2024-05-15 19:46:41.675580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.773 [2024-05-15 19:46:41.675931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.773 [2024-05-15 19:46:41.675939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.773 qpair failed and we were unable to recover it. 00:31:15.773 [2024-05-15 19:46:41.676215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.773 [2024-05-15 19:46:41.676405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.773 [2024-05-15 19:46:41.676413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.773 qpair failed and we were unable to recover it. 00:31:15.773 [2024-05-15 19:46:41.676849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.773 [2024-05-15 19:46:41.677210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.773 [2024-05-15 19:46:41.677216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.773 qpair failed and we were unable to recover it. 00:31:15.773 [2024-05-15 19:46:41.677459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.773 [2024-05-15 19:46:41.677789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.773 [2024-05-15 19:46:41.677796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.773 qpair failed and we were unable to recover it. 00:31:15.773 [2024-05-15 19:46:41.678135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.773 [2024-05-15 19:46:41.678389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.773 [2024-05-15 19:46:41.678396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.773 qpair failed and we were unable to recover it. 
00:31:15.773 [2024-05-15 19:46:41.678777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.773 [2024-05-15 19:46:41.679125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.773 [2024-05-15 19:46:41.679131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.773 qpair failed and we were unable to recover it. 00:31:15.773 [2024-05-15 19:46:41.679532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.773 [2024-05-15 19:46:41.679778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.773 [2024-05-15 19:46:41.679786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.773 qpair failed and we were unable to recover it. 00:31:15.773 [2024-05-15 19:46:41.680124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.773 [2024-05-15 19:46:41.680451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.773 [2024-05-15 19:46:41.680457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.773 qpair failed and we were unable to recover it. 00:31:15.773 [2024-05-15 19:46:41.680789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.773 [2024-05-15 19:46:41.681163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.773 [2024-05-15 19:46:41.681169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.773 qpair failed and we were unable to recover it. 00:31:15.774 [2024-05-15 19:46:41.681529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.681793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.681800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.774 qpair failed and we were unable to recover it. 00:31:15.774 [2024-05-15 19:46:41.682060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.682415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.682422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.774 qpair failed and we were unable to recover it. 00:31:15.774 [2024-05-15 19:46:41.682797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.683064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.683070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.774 qpair failed and we were unable to recover it. 
00:31:15.774 [2024-05-15 19:46:41.683356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.683750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.683757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.774 qpair failed and we were unable to recover it. 00:31:15.774 [2024-05-15 19:46:41.684011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.684393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.684400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.774 qpair failed and we were unable to recover it. 00:31:15.774 [2024-05-15 19:46:41.684773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.685101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.685107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.774 qpair failed and we were unable to recover it. 00:31:15.774 [2024-05-15 19:46:41.685478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.685827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.685834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.774 qpair failed and we were unable to recover it. 00:31:15.774 [2024-05-15 19:46:41.686215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.686582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.686589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.774 qpair failed and we were unable to recover it. 00:31:15.774 [2024-05-15 19:46:41.686928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.687301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.687308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.774 qpair failed and we were unable to recover it. 00:31:15.774 [2024-05-15 19:46:41.687681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.687929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.687935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.774 qpair failed and we were unable to recover it. 
00:31:15.774 [2024-05-15 19:46:41.688240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.688583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.688590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.774 qpair failed and we were unable to recover it. 00:31:15.774 [2024-05-15 19:46:41.688948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.689110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.689117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.774 qpair failed and we were unable to recover it. 00:31:15.774 [2024-05-15 19:46:41.689489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.689767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.689773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.774 qpair failed and we were unable to recover it. 00:31:15.774 [2024-05-15 19:46:41.690105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.690444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.690451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.774 qpair failed and we were unable to recover it. 00:31:15.774 [2024-05-15 19:46:41.690801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.691197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.691205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.774 qpair failed and we were unable to recover it. 00:31:15.774 [2024-05-15 19:46:41.691474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.691851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.691858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.774 qpair failed and we were unable to recover it. 00:31:15.774 [2024-05-15 19:46:41.692223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.692586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.692593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.774 qpair failed and we were unable to recover it. 
00:31:15.774 [2024-05-15 19:46:41.692942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.693236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.693243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.774 qpair failed and we were unable to recover it. 00:31:15.774 [2024-05-15 19:46:41.693534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.693885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.693892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.774 qpair failed and we were unable to recover it. 00:31:15.774 [2024-05-15 19:46:41.694284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.694515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.694522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.774 qpair failed and we were unable to recover it. 00:31:15.774 [2024-05-15 19:46:41.694900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.695267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.695273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.774 qpair failed and we were unable to recover it. 00:31:15.774 [2024-05-15 19:46:41.695617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.695975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.695981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.774 qpair failed and we were unable to recover it. 00:31:15.774 [2024-05-15 19:46:41.696324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.696571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.696578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.774 qpair failed and we were unable to recover it. 00:31:15.774 [2024-05-15 19:46:41.696960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.697310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.697320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.774 qpair failed and we were unable to recover it. 
00:31:15.774 [2024-05-15 19:46:41.697679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.698057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.698063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.774 qpair failed and we were unable to recover it. 00:31:15.774 [2024-05-15 19:46:41.698419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.698828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.698834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.774 qpair failed and we were unable to recover it. 00:31:15.774 [2024-05-15 19:46:41.699189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.699541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.699548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.774 qpair failed and we were unable to recover it. 00:31:15.774 [2024-05-15 19:46:41.699901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.700099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.700107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.774 qpair failed and we were unable to recover it. 00:31:15.774 [2024-05-15 19:46:41.700486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.700869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.700875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.774 qpair failed and we were unable to recover it. 00:31:15.774 [2024-05-15 19:46:41.701245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.774 [2024-05-15 19:46:41.701593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.701599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.775 qpair failed and we were unable to recover it. 00:31:15.775 [2024-05-15 19:46:41.701949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.702360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.702366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.775 qpair failed and we were unable to recover it. 
00:31:15.775 [2024-05-15 19:46:41.702726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.702965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.702971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.775 qpair failed and we were unable to recover it. 00:31:15.775 [2024-05-15 19:46:41.703355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.703743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.703750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.775 qpair failed and we were unable to recover it. 00:31:15.775 [2024-05-15 19:46:41.704117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.704458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.704465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.775 qpair failed and we were unable to recover it. 00:31:15.775 [2024-05-15 19:46:41.704825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.705205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.705211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.775 qpair failed and we were unable to recover it. 00:31:15.775 [2024-05-15 19:46:41.705505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.705874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.705880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.775 qpair failed and we were unable to recover it. 00:31:15.775 [2024-05-15 19:46:41.706227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.706574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.706583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.775 qpair failed and we were unable to recover it. 00:31:15.775 [2024-05-15 19:46:41.706974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.707322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.707329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.775 qpair failed and we were unable to recover it. 
00:31:15.775 [2024-05-15 19:46:41.707690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.707970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.707977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.775 qpair failed and we were unable to recover it. 00:31:15.775 [2024-05-15 19:46:41.708377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.708659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.708666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.775 qpair failed and we were unable to recover it. 00:31:15.775 [2024-05-15 19:46:41.709065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.709473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.709480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.775 qpair failed and we were unable to recover it. 00:31:15.775 [2024-05-15 19:46:41.709809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.710163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.710169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.775 qpair failed and we were unable to recover it. 00:31:15.775 [2024-05-15 19:46:41.710447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.710711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.710717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.775 qpair failed and we were unable to recover it. 00:31:15.775 [2024-05-15 19:46:41.710987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.711434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.711441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.775 qpair failed and we were unable to recover it. 00:31:15.775 [2024-05-15 19:46:41.711859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.712131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.712137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.775 qpair failed and we were unable to recover it. 
00:31:15.775 [2024-05-15 19:46:41.712505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.712928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.712934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.775 qpair failed and we were unable to recover it. 00:31:15.775 [2024-05-15 19:46:41.713287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.713624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.713632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.775 qpair failed and we were unable to recover it. 00:31:15.775 [2024-05-15 19:46:41.713997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.714305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.714311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.775 qpair failed and we were unable to recover it. 00:31:15.775 [2024-05-15 19:46:41.714681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.715073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.715080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.775 qpair failed and we were unable to recover it. 00:31:15.775 [2024-05-15 19:46:41.715539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.715934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.715944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.775 qpair failed and we were unable to recover it. 00:31:15.775 [2024-05-15 19:46:41.716344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.716785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.716792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.775 qpair failed and we were unable to recover it. 00:31:15.775 [2024-05-15 19:46:41.717082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.717303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.717310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.775 qpair failed and we were unable to recover it. 
00:31:15.775 [2024-05-15 19:46:41.717589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.717961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.717967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.775 qpair failed and we were unable to recover it. 00:31:15.775 [2024-05-15 19:46:41.718388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.718685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.718692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.775 qpair failed and we were unable to recover it. 00:31:15.775 [2024-05-15 19:46:41.718886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.719189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.719195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.775 qpair failed and we were unable to recover it. 00:31:15.775 [2024-05-15 19:46:41.719595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.719949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.719956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.775 qpair failed and we were unable to recover it. 00:31:15.775 [2024-05-15 19:46:41.720320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.720638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.720648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.775 qpair failed and we were unable to recover it. 00:31:15.775 [2024-05-15 19:46:41.721043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.721380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.721387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.775 qpair failed and we were unable to recover it. 00:31:15.775 [2024-05-15 19:46:41.721810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.722168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.775 [2024-05-15 19:46:41.722174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.775 qpair failed and we were unable to recover it. 
00:31:15.776 [2024-05-15 19:46:41.722546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.722918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.722931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.776 qpair failed and we were unable to recover it. 00:31:15.776 [2024-05-15 19:46:41.723298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.723665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.723672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.776 qpair failed and we were unable to recover it. 00:31:15.776 [2024-05-15 19:46:41.724020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.724276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.724282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.776 qpair failed and we were unable to recover it. 00:31:15.776 [2024-05-15 19:46:41.724665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.724969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.724975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.776 qpair failed and we were unable to recover it. 00:31:15.776 [2024-05-15 19:46:41.725319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.725626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.725633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.776 qpair failed and we were unable to recover it. 00:31:15.776 [2024-05-15 19:46:41.725901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.726288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.726295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.776 qpair failed and we were unable to recover it. 00:31:15.776 [2024-05-15 19:46:41.726552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.726816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.726823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.776 qpair failed and we were unable to recover it. 
00:31:15.776 [2024-05-15 19:46:41.727055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.727454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.727460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.776 qpair failed and we were unable to recover it. 00:31:15.776 [2024-05-15 19:46:41.727824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.727989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.727995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.776 qpair failed and we were unable to recover it. 00:31:15.776 [2024-05-15 19:46:41.728332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.728697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.728703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.776 qpair failed and we were unable to recover it. 00:31:15.776 [2024-05-15 19:46:41.729045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.729298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.729304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.776 qpair failed and we were unable to recover it. 00:31:15.776 [2024-05-15 19:46:41.729675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.729868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.729875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.776 qpair failed and we were unable to recover it. 00:31:15.776 [2024-05-15 19:46:41.730195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.730476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.730483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.776 qpair failed and we were unable to recover it. 00:31:15.776 [2024-05-15 19:46:41.730882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.731151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.731158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.776 qpair failed and we were unable to recover it. 
00:31:15.776 [2024-05-15 19:46:41.731536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.731907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.731913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.776 qpair failed and we were unable to recover it. 00:31:15.776 [2024-05-15 19:46:41.732270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.732626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.732633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.776 qpair failed and we were unable to recover it. 00:31:15.776 [2024-05-15 19:46:41.732951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.733106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.733113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.776 qpair failed and we were unable to recover it. 00:31:15.776 [2024-05-15 19:46:41.733379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.733707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.733713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.776 qpair failed and we were unable to recover it. 00:31:15.776 [2024-05-15 19:46:41.734023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.734332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.734340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.776 qpair failed and we were unable to recover it. 00:31:15.776 [2024-05-15 19:46:41.734701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.735058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.735065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.776 qpair failed and we were unable to recover it. 00:31:15.776 [2024-05-15 19:46:41.735434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.735810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.735817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.776 qpair failed and we were unable to recover it. 
00:31:15.776 [2024-05-15 19:46:41.736191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.736622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.736630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.776 qpair failed and we were unable to recover it. 00:31:15.776 [2024-05-15 19:46:41.737006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.737366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.737374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.776 qpair failed and we were unable to recover it. 00:31:15.776 [2024-05-15 19:46:41.737799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.738127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.738134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.776 qpair failed and we were unable to recover it. 00:31:15.776 [2024-05-15 19:46:41.738526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.738734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.738742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.776 qpair failed and we were unable to recover it. 00:31:15.776 [2024-05-15 19:46:41.739106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.739485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.739492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.776 qpair failed and we were unable to recover it. 00:31:15.776 [2024-05-15 19:46:41.739845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.740120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.740127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.776 qpair failed and we were unable to recover it. 00:31:15.776 [2024-05-15 19:46:41.740358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.740730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.740736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.776 qpair failed and we were unable to recover it. 
00:31:15.776 [2024-05-15 19:46:41.741161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.741488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.741494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.776 qpair failed and we were unable to recover it. 00:31:15.776 [2024-05-15 19:46:41.741919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.776 [2024-05-15 19:46:41.742255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.777 [2024-05-15 19:46:41.742261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.777 qpair failed and we were unable to recover it. 00:31:15.777 [2024-05-15 19:46:41.742595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.777 [2024-05-15 19:46:41.742942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.777 [2024-05-15 19:46:41.742948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.777 qpair failed and we were unable to recover it. 00:31:15.777 [2024-05-15 19:46:41.743299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.777 [2024-05-15 19:46:41.743674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.777 [2024-05-15 19:46:41.743680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.777 qpair failed and we were unable to recover it. 00:31:15.777 [2024-05-15 19:46:41.744062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.777 [2024-05-15 19:46:41.744303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.777 [2024-05-15 19:46:41.744309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.777 qpair failed and we were unable to recover it. 00:31:15.777 [2024-05-15 19:46:41.744572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.777 [2024-05-15 19:46:41.744813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.777 [2024-05-15 19:46:41.744819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.777 qpair failed and we were unable to recover it. 00:31:15.777 [2024-05-15 19:46:41.745089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.777 [2024-05-15 19:46:41.745491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.777 [2024-05-15 19:46:41.745497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.777 qpair failed and we were unable to recover it. 
00:31:15.777 [2024-05-15 19:46:41.745862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.777 [2024-05-15 19:46:41.746223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.777 [2024-05-15 19:46:41.746229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.777 qpair failed and we were unable to recover it. 00:31:15.777 [2024-05-15 19:46:41.746592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.777 [2024-05-15 19:46:41.746988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.777 [2024-05-15 19:46:41.746994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.777 qpair failed and we were unable to recover it. 00:31:15.777 [2024-05-15 19:46:41.747253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.777 [2024-05-15 19:46:41.747661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.777 [2024-05-15 19:46:41.747668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.777 qpair failed and we were unable to recover it. 00:31:15.777 [2024-05-15 19:46:41.748055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.777 [2024-05-15 19:46:41.748415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.777 [2024-05-15 19:46:41.748422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.777 qpair failed and we were unable to recover it. 00:31:15.777 [2024-05-15 19:46:41.748652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.777 [2024-05-15 19:46:41.749031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.777 [2024-05-15 19:46:41.749037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.777 qpair failed and we were unable to recover it. 00:31:15.777 [2024-05-15 19:46:41.749384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.777 [2024-05-15 19:46:41.749688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.777 [2024-05-15 19:46:41.749695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.777 qpair failed and we were unable to recover it. 00:31:15.777 [2024-05-15 19:46:41.750081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.777 [2024-05-15 19:46:41.750438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.777 [2024-05-15 19:46:41.750445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.777 qpair failed and we were unable to recover it. 
00:31:15.777 [2024-05-15 19:46:41.750729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.777 [2024-05-15 19:46:41.751072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.777 [2024-05-15 19:46:41.751078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.777 qpair failed and we were unable to recover it. 00:31:15.777 [2024-05-15 19:46:41.751372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.777 [2024-05-15 19:46:41.751734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.777 [2024-05-15 19:46:41.751742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.777 qpair failed and we were unable to recover it. 00:31:15.777 [2024-05-15 19:46:41.752122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.777 [2024-05-15 19:46:41.752484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.777 [2024-05-15 19:46:41.752491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.777 qpair failed and we were unable to recover it. 00:31:15.777 [2024-05-15 19:46:41.752847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.777 [2024-05-15 19:46:41.753136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.777 [2024-05-15 19:46:41.753142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.777 qpair failed and we were unable to recover it. 00:31:15.777 [2024-05-15 19:46:41.753511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.777 [2024-05-15 19:46:41.753728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.777 [2024-05-15 19:46:41.753734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.777 qpair failed and we were unable to recover it. 00:31:15.777 [2024-05-15 19:46:41.754062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.777 [2024-05-15 19:46:41.754395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.777 [2024-05-15 19:46:41.754401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.777 qpair failed and we were unable to recover it. 00:31:15.777 [2024-05-15 19:46:41.754643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.777 [2024-05-15 19:46:41.754850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.777 [2024-05-15 19:46:41.754857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.777 qpair failed and we were unable to recover it. 
00:31:15.777 [2024-05-15 19:46:41.755277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.777 [2024-05-15 19:46:41.755615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.777 [2024-05-15 19:46:41.755622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.777 qpair failed and we were unable to recover it. 00:31:15.777 [2024-05-15 19:46:41.755965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.777 [2024-05-15 19:46:41.756293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.777 [2024-05-15 19:46:41.756299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.777 qpair failed and we were unable to recover it. 00:31:15.777 [2024-05-15 19:46:41.756680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.777 [2024-05-15 19:46:41.757070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.777 [2024-05-15 19:46:41.757078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.777 qpair failed and we were unable to recover it. 00:31:15.777 [2024-05-15 19:46:41.757276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.757531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.757538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.778 qpair failed and we were unable to recover it. 00:31:15.778 [2024-05-15 19:46:41.757879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.758231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.758238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.778 qpair failed and we were unable to recover it. 00:31:15.778 [2024-05-15 19:46:41.758666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.759007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.759014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.778 qpair failed and we were unable to recover it. 00:31:15.778 [2024-05-15 19:46:41.759259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.759499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.759506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.778 qpair failed and we were unable to recover it. 
00:31:15.778 [2024-05-15 19:46:41.759766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.759906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.759912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.778 qpair failed and we were unable to recover it. 00:31:15.778 [2024-05-15 19:46:41.760305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.760655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.760662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.778 qpair failed and we were unable to recover it. 00:31:15.778 [2024-05-15 19:46:41.761006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.761246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.761252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.778 qpair failed and we were unable to recover it. 00:31:15.778 [2024-05-15 19:46:41.761658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.761903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.761910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.778 qpair failed and we were unable to recover it. 00:31:15.778 [2024-05-15 19:46:41.762275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.762602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.762609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.778 qpair failed and we were unable to recover it. 00:31:15.778 [2024-05-15 19:46:41.762847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.763215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.763222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.778 qpair failed and we were unable to recover it. 00:31:15.778 [2024-05-15 19:46:41.763609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.763980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.763986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.778 qpair failed and we were unable to recover it. 
00:31:15.778 [2024-05-15 19:46:41.764341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.764753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.764760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.778 qpair failed and we were unable to recover it. 00:31:15.778 [2024-05-15 19:46:41.765103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.765367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.765374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.778 qpair failed and we were unable to recover it. 00:31:15.778 [2024-05-15 19:46:41.765690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.766034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.766041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.778 qpair failed and we were unable to recover it. 00:31:15.778 [2024-05-15 19:46:41.766385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.766723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.766730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.778 qpair failed and we were unable to recover it. 00:31:15.778 [2024-05-15 19:46:41.767070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.767424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.767430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.778 qpair failed and we were unable to recover it. 00:31:15.778 [2024-05-15 19:46:41.767656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.767979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.767986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.778 qpair failed and we were unable to recover it. 00:31:15.778 [2024-05-15 19:46:41.768264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.768616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.768623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.778 qpair failed and we were unable to recover it. 
00:31:15.778 [2024-05-15 19:46:41.768992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.769244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.769251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.778 qpair failed and we were unable to recover it. 00:31:15.778 [2024-05-15 19:46:41.769680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.769989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.769996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.778 qpair failed and we were unable to recover it. 00:31:15.778 [2024-05-15 19:46:41.770360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.770419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.770426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.778 qpair failed and we were unable to recover it. 00:31:15.778 [2024-05-15 19:46:41.770821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.771161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.771167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.778 qpair failed and we were unable to recover it. 00:31:15.778 [2024-05-15 19:46:41.771542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.771894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.771901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.778 qpair failed and we were unable to recover it. 00:31:15.778 [2024-05-15 19:46:41.772237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.772703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.772710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.778 qpair failed and we were unable to recover it. 00:31:15.778 [2024-05-15 19:46:41.773062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.773418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.773425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.778 qpair failed and we were unable to recover it. 
00:31:15.778 [2024-05-15 19:46:41.773797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.774232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.774238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.778 qpair failed and we were unable to recover it. 00:31:15.778 [2024-05-15 19:46:41.774547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.774936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.774942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.778 qpair failed and we were unable to recover it. 00:31:15.778 [2024-05-15 19:46:41.775303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.775573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.775580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.778 qpair failed and we were unable to recover it. 00:31:15.778 [2024-05-15 19:46:41.775754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.776190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.776196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.778 qpair failed and we were unable to recover it. 00:31:15.778 [2024-05-15 19:46:41.776571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.776818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.778 [2024-05-15 19:46:41.776824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.778 qpair failed and we were unable to recover it. 00:31:15.779 [2024-05-15 19:46:41.777195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.777475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.777482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.779 qpair failed and we were unable to recover it. 00:31:15.779 [2024-05-15 19:46:41.777825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.778190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.778197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.779 qpair failed and we were unable to recover it. 
00:31:15.779 [2024-05-15 19:46:41.778674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.779032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.779039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.779 qpair failed and we were unable to recover it. 00:31:15.779 [2024-05-15 19:46:41.779521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.779920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.779929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.779 qpair failed and we were unable to recover it. 00:31:15.779 [2024-05-15 19:46:41.780293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.780641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.780649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.779 qpair failed and we were unable to recover it. 00:31:15.779 [2024-05-15 19:46:41.780930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.781290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.781296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.779 qpair failed and we were unable to recover it. 00:31:15.779 [2024-05-15 19:46:41.781562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.781937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.781944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.779 qpair failed and we were unable to recover it. 00:31:15.779 [2024-05-15 19:46:41.782300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.782596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.782603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.779 qpair failed and we were unable to recover it. 00:31:15.779 [2024-05-15 19:46:41.782941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.783308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.783318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.779 qpair failed and we were unable to recover it. 
00:31:15.779 [2024-05-15 19:46:41.783670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.784048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.784055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.779 qpair failed and we were unable to recover it. 00:31:15.779 [2024-05-15 19:46:41.784512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.784825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.784831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.779 qpair failed and we were unable to recover it. 00:31:15.779 [2024-05-15 19:46:41.785193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.785660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.785688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.779 qpair failed and we were unable to recover it. 00:31:15.779 [2024-05-15 19:46:41.786137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.786530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.786557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.779 qpair failed and we were unable to recover it. 00:31:15.779 [2024-05-15 19:46:41.786956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.787354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.787362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.779 qpair failed and we were unable to recover it. 00:31:15.779 [2024-05-15 19:46:41.787582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.787841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.787848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.779 qpair failed and we were unable to recover it. 00:31:15.779 [2024-05-15 19:46:41.788261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.788644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.788651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.779 qpair failed and we were unable to recover it. 
00:31:15.779 [2024-05-15 19:46:41.788815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.789114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.789121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.779 qpair failed and we were unable to recover it. 00:31:15.779 [2024-05-15 19:46:41.789586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.789960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.789966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.779 qpair failed and we were unable to recover it. 00:31:15.779 [2024-05-15 19:46:41.790410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.790791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.790797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.779 qpair failed and we were unable to recover it. 00:31:15.779 [2024-05-15 19:46:41.791048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.791447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.791454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.779 qpair failed and we were unable to recover it. 00:31:15.779 [2024-05-15 19:46:41.791657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.791996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.792002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.779 qpair failed and we were unable to recover it. 00:31:15.779 [2024-05-15 19:46:41.792442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.792814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.792820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.779 qpair failed and we were unable to recover it. 00:31:15.779 [2024-05-15 19:46:41.793084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.793405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.793413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.779 qpair failed and we were unable to recover it. 
00:31:15.779 [2024-05-15 19:46:41.793789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.794162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.794169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.779 qpair failed and we were unable to recover it. 00:31:15.779 [2024-05-15 19:46:41.794452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.794716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.794722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.779 qpair failed and we were unable to recover it. 00:31:15.779 [2024-05-15 19:46:41.795084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.795332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.795339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.779 qpair failed and we were unable to recover it. 00:31:15.779 [2024-05-15 19:46:41.795687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.796043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.796049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.779 qpair failed and we were unable to recover it. 00:31:15.779 [2024-05-15 19:46:41.796415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.796817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.796823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.779 qpair failed and we were unable to recover it. 00:31:15.779 [2024-05-15 19:46:41.797173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.797497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.797504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.779 qpair failed and we were unable to recover it. 00:31:15.779 [2024-05-15 19:46:41.797864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.798223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.779 [2024-05-15 19:46:41.798229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.780 qpair failed and we were unable to recover it. 
00:31:15.780 [2024-05-15 19:46:41.798601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.798885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.798892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.780 qpair failed and we were unable to recover it. 00:31:15.780 [2024-05-15 19:46:41.799242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.799555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.799562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.780 qpair failed and we were unable to recover it. 00:31:15.780 [2024-05-15 19:46:41.799914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.800246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.800253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.780 qpair failed and we were unable to recover it. 00:31:15.780 [2024-05-15 19:46:41.800606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.800930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.800937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.780 qpair failed and we were unable to recover it. 00:31:15.780 [2024-05-15 19:46:41.801323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.801683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.801690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.780 qpair failed and we were unable to recover it. 00:31:15.780 [2024-05-15 19:46:41.802044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.802395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.802402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.780 qpair failed and we were unable to recover it. 00:31:15.780 [2024-05-15 19:46:41.802777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.802984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.802992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.780 qpair failed and we were unable to recover it. 
00:31:15.780 [2024-05-15 19:46:41.803359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.803809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.803816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.780 qpair failed and we were unable to recover it. 00:31:15.780 [2024-05-15 19:46:41.804167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.804509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.804517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.780 qpair failed and we were unable to recover it. 00:31:15.780 [2024-05-15 19:46:41.804871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.805219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.805226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.780 qpair failed and we were unable to recover it. 00:31:15.780 [2024-05-15 19:46:41.805586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.805951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.805957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.780 qpair failed and we were unable to recover it. 00:31:15.780 [2024-05-15 19:46:41.806323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.806700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.806706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.780 qpair failed and we were unable to recover it. 00:31:15.780 [2024-05-15 19:46:41.807082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.807425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.807432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.780 qpair failed and we were unable to recover it. 00:31:15.780 [2024-05-15 19:46:41.807814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.808165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.808172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.780 qpair failed and we were unable to recover it. 
00:31:15.780 [2024-05-15 19:46:41.808404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.808793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.808799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.780 qpair failed and we were unable to recover it. 00:31:15.780 [2024-05-15 19:46:41.809151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.809463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.809469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.780 qpair failed and we were unable to recover it. 00:31:15.780 [2024-05-15 19:46:41.809851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.810121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.810130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.780 qpair failed and we were unable to recover it. 00:31:15.780 [2024-05-15 19:46:41.810388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.810789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.810795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.780 qpair failed and we were unable to recover it. 00:31:15.780 [2024-05-15 19:46:41.811146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.811451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.811457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.780 qpair failed and we were unable to recover it. 00:31:15.780 [2024-05-15 19:46:41.811690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.812089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.812096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.780 qpair failed and we were unable to recover it. 00:31:15.780 [2024-05-15 19:46:41.812496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.812949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.812955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.780 qpair failed and we were unable to recover it. 
00:31:15.780 [2024-05-15 19:46:41.813215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.813576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.813583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.780 qpair failed and we were unable to recover it. 00:31:15.780 [2024-05-15 19:46:41.813938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.814270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.814277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.780 qpair failed and we were unable to recover it. 00:31:15.780 [2024-05-15 19:46:41.814546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.814928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.814934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.780 qpair failed and we were unable to recover it. 00:31:15.780 [2024-05-15 19:46:41.815298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.815670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.815677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.780 qpair failed and we were unable to recover it. 00:31:15.780 [2024-05-15 19:46:41.816019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.816370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.816377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.780 qpair failed and we were unable to recover it. 00:31:15.780 [2024-05-15 19:46:41.816626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.816826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.816835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.780 qpair failed and we were unable to recover it. 00:31:15.780 [2024-05-15 19:46:41.817116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.817404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.817412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.780 qpair failed and we were unable to recover it. 
00:31:15.780 [2024-05-15 19:46:41.817794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.818152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.818158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.780 qpair failed and we were unable to recover it. 00:31:15.780 [2024-05-15 19:46:41.818475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.780 [2024-05-15 19:46:41.818855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.818861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.781 qpair failed and we were unable to recover it. 00:31:15.781 [2024-05-15 19:46:41.819158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.819418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.819424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.781 qpair failed and we were unable to recover it. 00:31:15.781 [2024-05-15 19:46:41.819824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.820114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.820121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.781 qpair failed and we were unable to recover it. 00:31:15.781 [2024-05-15 19:46:41.820497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.820853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.820859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.781 qpair failed and we were unable to recover it. 00:31:15.781 [2024-05-15 19:46:41.821257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.821504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.821511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.781 qpair failed and we were unable to recover it. 00:31:15.781 [2024-05-15 19:46:41.821789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.822207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.822213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.781 qpair failed and we were unable to recover it. 
00:31:15.781 [2024-05-15 19:46:41.822492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.822863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.822869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.781 qpair failed and we were unable to recover it. 00:31:15.781 [2024-05-15 19:46:41.823307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.823687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.823695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.781 qpair failed and we were unable to recover it. 00:31:15.781 [2024-05-15 19:46:41.823953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.824274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.824280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.781 qpair failed and we were unable to recover it. 00:31:15.781 [2024-05-15 19:46:41.824538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.824801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.824807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.781 qpair failed and we were unable to recover it. 00:31:15.781 [2024-05-15 19:46:41.825150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.825552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.825558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.781 qpair failed and we were unable to recover it. 00:31:15.781 [2024-05-15 19:46:41.825891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.826249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.826256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.781 qpair failed and we were unable to recover it. 00:31:15.781 [2024-05-15 19:46:41.826629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.826795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.826801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.781 qpair failed and we were unable to recover it. 
00:31:15.781 [2024-05-15 19:46:41.827147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.827468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.827475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.781 qpair failed and we were unable to recover it. 00:31:15.781 [2024-05-15 19:46:41.827722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.828110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.828117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.781 qpair failed and we were unable to recover it. 00:31:15.781 [2024-05-15 19:46:41.828454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.828834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.828841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.781 qpair failed and we were unable to recover it. 00:31:15.781 [2024-05-15 19:46:41.829123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.829496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.829503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.781 qpair failed and we were unable to recover it. 00:31:15.781 [2024-05-15 19:46:41.829933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.830283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.830291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.781 qpair failed and we were unable to recover it. 00:31:15.781 [2024-05-15 19:46:41.830607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.830972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.830978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.781 qpair failed and we were unable to recover it. 00:31:15.781 [2024-05-15 19:46:41.831326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.831415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.831423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.781 qpair failed and we were unable to recover it. 
00:31:15.781 [2024-05-15 19:46:41.831671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.832056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.832062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.781 qpair failed and we were unable to recover it. 00:31:15.781 [2024-05-15 19:46:41.832411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.832799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.832806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.781 qpair failed and we were unable to recover it. 00:31:15.781 [2024-05-15 19:46:41.833149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.833522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.833529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.781 qpair failed and we were unable to recover it. 00:31:15.781 [2024-05-15 19:46:41.833875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.834214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.834220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.781 qpair failed and we were unable to recover it. 00:31:15.781 [2024-05-15 19:46:41.834596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.834991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.834997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.781 qpair failed and we were unable to recover it. 00:31:15.781 [2024-05-15 19:46:41.835366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.835697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.835704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.781 qpair failed and we were unable to recover it. 00:31:15.781 [2024-05-15 19:46:41.835971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.836358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.836365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.781 qpair failed and we were unable to recover it. 
00:31:15.781 [2024-05-15 19:46:41.836658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.836998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.837004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.781 qpair failed and we were unable to recover it. 00:31:15.781 [2024-05-15 19:46:41.837378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.837749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.837755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.781 qpair failed and we were unable to recover it. 00:31:15.781 [2024-05-15 19:46:41.838094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.838321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.781 [2024-05-15 19:46:41.838329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.781 qpair failed and we were unable to recover it. 00:31:15.781 [2024-05-15 19:46:41.838638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.839004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.839011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.782 qpair failed and we were unable to recover it. 00:31:15.782 [2024-05-15 19:46:41.839293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.839547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.839553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.782 qpair failed and we were unable to recover it. 00:31:15.782 [2024-05-15 19:46:41.839907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.840141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.840148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.782 qpair failed and we were unable to recover it. 00:31:15.782 [2024-05-15 19:46:41.840445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.840795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.840802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.782 qpair failed and we were unable to recover it. 
00:31:15.782 [2024-05-15 19:46:41.841065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.841295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.841301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.782 qpair failed and we were unable to recover it. 00:31:15.782 [2024-05-15 19:46:41.841672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.842029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.842036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.782 qpair failed and we were unable to recover it. 00:31:15.782 [2024-05-15 19:46:41.842380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.842667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.842673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.782 qpair failed and we were unable to recover it. 00:31:15.782 [2024-05-15 19:46:41.842938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.843324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.843331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.782 qpair failed and we were unable to recover it. 00:31:15.782 [2024-05-15 19:46:41.843519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.843898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.843905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.782 qpair failed and we were unable to recover it. 00:31:15.782 [2024-05-15 19:46:41.844251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.844617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.844624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.782 qpair failed and we were unable to recover it. 00:31:15.782 [2024-05-15 19:46:41.844976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.845205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.845212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.782 qpair failed and we were unable to recover it. 
00:31:15.782 [2024-05-15 19:46:41.845484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.845839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.845845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.782 qpair failed and we were unable to recover it. 00:31:15.782 [2024-05-15 19:46:41.846201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.846408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.846416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.782 qpair failed and we were unable to recover it. 00:31:15.782 [2024-05-15 19:46:41.846678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.847055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.847061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.782 qpair failed and we were unable to recover it. 00:31:15.782 [2024-05-15 19:46:41.847400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.847761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.847767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.782 qpair failed and we were unable to recover it. 00:31:15.782 [2024-05-15 19:46:41.848145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.848482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.848488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.782 qpair failed and we were unable to recover it. 00:31:15.782 [2024-05-15 19:46:41.848741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.849125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.849131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.782 qpair failed and we were unable to recover it. 00:31:15.782 [2024-05-15 19:46:41.849505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.849776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.849782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.782 qpair failed and we were unable to recover it. 
00:31:15.782 [2024-05-15 19:46:41.850155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.850547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.850554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.782 qpair failed and we were unable to recover it. 00:31:15.782 [2024-05-15 19:46:41.850898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.851228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.851235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.782 qpair failed and we were unable to recover it. 00:31:15.782 [2024-05-15 19:46:41.851597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.851998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.852004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.782 qpair failed and we were unable to recover it. 00:31:15.782 [2024-05-15 19:46:41.852352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.852664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.852670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.782 qpair failed and we were unable to recover it. 00:31:15.782 [2024-05-15 19:46:41.853063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.853266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.853273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.782 qpair failed and we were unable to recover it. 00:31:15.782 [2024-05-15 19:46:41.853719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.854132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.854139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.782 qpair failed and we were unable to recover it. 00:31:15.782 [2024-05-15 19:46:41.854511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.854878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.782 [2024-05-15 19:46:41.854884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.782 qpair failed and we were unable to recover it. 
00:31:15.783 [2024-05-15 19:46:41.855232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.855597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.855603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.783 qpair failed and we were unable to recover it. 00:31:15.783 [2024-05-15 19:46:41.855882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.856106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.856112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.783 qpair failed and we were unable to recover it. 00:31:15.783 [2024-05-15 19:46:41.856433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.856833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.856839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.783 qpair failed and we were unable to recover it. 00:31:15.783 [2024-05-15 19:46:41.857189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.857487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.857502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.783 qpair failed and we were unable to recover it. 00:31:15.783 [2024-05-15 19:46:41.857750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.858004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.858010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.783 qpair failed and we were unable to recover it. 00:31:15.783 [2024-05-15 19:46:41.858398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.858808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.858815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.783 qpair failed and we were unable to recover it. 00:31:15.783 [2024-05-15 19:46:41.859167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.859390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.859397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.783 qpair failed and we were unable to recover it. 
00:31:15.783 [2024-05-15 19:46:41.859798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.860217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.860224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.783 qpair failed and we were unable to recover it. 00:31:15.783 [2024-05-15 19:46:41.860549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.860904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.860911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.783 qpair failed and we were unable to recover it. 00:31:15.783 [2024-05-15 19:46:41.861228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.861636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.861643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.783 qpair failed and we were unable to recover it. 00:31:15.783 [2024-05-15 19:46:41.861912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.862158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.862164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.783 qpair failed and we were unable to recover it. 00:31:15.783 [2024-05-15 19:46:41.862441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.862641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.862647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.783 qpair failed and we were unable to recover it. 00:31:15.783 [2024-05-15 19:46:41.862913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.863286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.863292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.783 qpair failed and we were unable to recover it. 00:31:15.783 [2024-05-15 19:46:41.863540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.863928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.863934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.783 qpair failed and we were unable to recover it. 
00:31:15.783 [2024-05-15 19:46:41.864246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.864605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.864612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.783 qpair failed and we were unable to recover it. 00:31:15.783 [2024-05-15 19:46:41.865006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.865365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.865372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.783 qpair failed and we were unable to recover it. 00:31:15.783 [2024-05-15 19:46:41.865747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.866109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.866116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.783 qpair failed and we were unable to recover it. 00:31:15.783 [2024-05-15 19:46:41.866489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.866864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.866870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.783 qpair failed and we were unable to recover it. 00:31:15.783 [2024-05-15 19:46:41.867150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.867535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.867541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.783 qpair failed and we were unable to recover it. 00:31:15.783 [2024-05-15 19:46:41.867891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.868229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.868236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.783 qpair failed and we were unable to recover it. 00:31:15.783 [2024-05-15 19:46:41.868600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.868994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.869000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.783 qpair failed and we were unable to recover it. 
00:31:15.783 [2024-05-15 19:46:41.869191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.869573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.869579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.783 qpair failed and we were unable to recover it. 00:31:15.783 [2024-05-15 19:46:41.869935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.870262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.870269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.783 qpair failed and we were unable to recover it. 00:31:15.783 [2024-05-15 19:46:41.870647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.871017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.871024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.783 qpair failed and we were unable to recover it. 00:31:15.783 [2024-05-15 19:46:41.871372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.871763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.871769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.783 qpair failed and we were unable to recover it. 00:31:15.783 [2024-05-15 19:46:41.872086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.872405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.872412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.783 qpair failed and we were unable to recover it. 00:31:15.783 [2024-05-15 19:46:41.872792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.873180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.873187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.783 qpair failed and we were unable to recover it. 00:31:15.783 [2024-05-15 19:46:41.873462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.873896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.873902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.783 qpair failed and we were unable to recover it. 
00:31:15.783 [2024-05-15 19:46:41.874261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.874347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.874355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.783 qpair failed and we were unable to recover it. 00:31:15.783 [2024-05-15 19:46:41.874727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.874999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.783 [2024-05-15 19:46:41.875006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.784 qpair failed and we were unable to recover it. 00:31:15.784 [2024-05-15 19:46:41.875391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 [2024-05-15 19:46:41.875657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 [2024-05-15 19:46:41.875663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.784 qpair failed and we were unable to recover it. 00:31:15.784 [2024-05-15 19:46:41.875810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 [2024-05-15 19:46:41.876171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 [2024-05-15 19:46:41.876177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.784 qpair failed and we were unable to recover it. 00:31:15.784 [2024-05-15 19:46:41.876589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 [2024-05-15 19:46:41.876915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 [2024-05-15 19:46:41.876923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.784 qpair failed and we were unable to recover it. 00:31:15.784 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3803246 Killed "${NVMF_APP[@]}" "$@" 00:31:15.784 [2024-05-15 19:46:41.877288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 [2024-05-15 19:46:41.877712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 [2024-05-15 19:46:41.877718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.784 qpair failed and we were unable to recover it. 
00:31:15.784 19:46:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:31:15.784 [2024-05-15 19:46:41.877989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 19:46:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:31:15.784 [2024-05-15 19:46:41.878357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 [2024-05-15 19:46:41.878364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.784 qpair failed and we were unable to recover it. 00:31:15.784 19:46:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:15.784 [2024-05-15 19:46:41.878732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 19:46:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:15.784 19:46:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:15.784 [2024-05-15 19:46:41.879028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 [2024-05-15 19:46:41.879035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.784 qpair failed and we were unable to recover it. 00:31:15.784 [2024-05-15 19:46:41.879384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 [2024-05-15 19:46:41.879752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 [2024-05-15 19:46:41.879759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.784 qpair failed and we were unable to recover it. 00:31:15.784 [2024-05-15 19:46:41.880205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 [2024-05-15 19:46:41.880520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 [2024-05-15 19:46:41.880527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.784 qpair failed and we were unable to recover it. 00:31:15.784 [2024-05-15 19:46:41.880749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 [2024-05-15 19:46:41.880908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 [2024-05-15 19:46:41.880914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.784 qpair failed and we were unable to recover it. 00:31:15.784 [2024-05-15 19:46:41.881276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 [2024-05-15 19:46:41.881633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 [2024-05-15 19:46:41.881640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.784 qpair failed and we were unable to recover it. 
00:31:15.784 [2024-05-15 19:46:41.882000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 [2024-05-15 19:46:41.882371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 [2024-05-15 19:46:41.882378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.784 qpair failed and we were unable to recover it. 00:31:15.784 [2024-05-15 19:46:41.882711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 [2024-05-15 19:46:41.883080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 [2024-05-15 19:46:41.883087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.784 qpair failed and we were unable to recover it. 00:31:15.784 [2024-05-15 19:46:41.883387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 [2024-05-15 19:46:41.883709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 [2024-05-15 19:46:41.883715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.784 qpair failed and we were unable to recover it. 00:31:15.784 [2024-05-15 19:46:41.884106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 [2024-05-15 19:46:41.884487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 [2024-05-15 19:46:41.884494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.784 qpair failed and we were unable to recover it. 00:31:15.784 [2024-05-15 19:46:41.884849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 [2024-05-15 19:46:41.885080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 [2024-05-15 19:46:41.885087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.784 qpair failed and we were unable to recover it. 00:31:15.784 [2024-05-15 19:46:41.885501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 [2024-05-15 19:46:41.885796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 [2024-05-15 19:46:41.885804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.784 qpair failed and we were unable to recover it. 00:31:15.784 [2024-05-15 19:46:41.886205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 19:46:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3804211 00:31:15.784 [2024-05-15 19:46:41.886514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 [2024-05-15 19:46:41.886524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.784 qpair failed and we were unable to recover it. 
00:31:15.784 19:46:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3804211 00:31:15.784 [2024-05-15 19:46:41.886906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 19:46:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3804211 ']' 00:31:15.784 19:46:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:31:15.784 [2024-05-15 19:46:41.887309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 [2024-05-15 19:46:41.887328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.784 qpair failed and we were unable to recover it. 00:31:15.784 19:46:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:15.784 19:46:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:15.784 [2024-05-15 19:46:41.887692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 19:46:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:15.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:15.784 19:46:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:15.784 [2024-05-15 19:46:41.888101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 [2024-05-15 19:46:41.888110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.784 qpair failed and we were unable to recover it. 00:31:15.784 19:46:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:15.784 [2024-05-15 19:46:41.888312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 [2024-05-15 19:46:41.888721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 [2024-05-15 19:46:41.888729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.784 qpair failed and we were unable to recover it. 00:31:15.784 [2024-05-15 19:46:41.889126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 [2024-05-15 19:46:41.889530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 [2024-05-15 19:46:41.889557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.784 qpair failed and we were unable to recover it. 
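Once the nvmf_tgt launched above is up and listening on /var/tmp/spdk.sock, disconnect_init typically rebuilds the TCP transport and listener over SPDK's JSON-RPC socket, at which point the initiator's retries against 10.0.0.2:4420 stop failing with errno 111. A minimal sketch of that kind of sequence is shown below; the subsystem NQN, serial number and Malloc bdev names are illustrative placeholders rather than values taken from this log, and in this environment each call would be wrapped in the same "ip netns exec cvl_0_0_ns_spdk" prefix used for nvmf_tgt:

  scripts/rpc.py nvmf_create_transport -t TCP
  scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420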
00:31:15.784 [2024-05-15 19:46:41.889825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 [2024-05-15 19:46:41.892073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 [2024-05-15 19:46:41.892090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.784 qpair failed and we were unable to recover it. 00:31:15.784 [2024-05-15 19:46:41.892273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 [2024-05-15 19:46:41.892912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 [2024-05-15 19:46:41.892930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.784 qpair failed and we were unable to recover it. 00:31:15.784 [2024-05-15 19:46:41.893365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 [2024-05-15 19:46:41.893728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 [2024-05-15 19:46:41.893735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.784 qpair failed and we were unable to recover it. 00:31:15.784 [2024-05-15 19:46:41.894078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.784 [2024-05-15 19:46:41.894522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.894529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.785 qpair failed and we were unable to recover it. 00:31:15.785 [2024-05-15 19:46:41.894895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.895259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.895265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.785 qpair failed and we were unable to recover it. 00:31:15.785 [2024-05-15 19:46:41.895640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.896005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.896012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.785 qpair failed and we were unable to recover it. 00:31:15.785 [2024-05-15 19:46:41.896320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.896697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.896704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.785 qpair failed and we were unable to recover it. 
00:31:15.785 [2024-05-15 19:46:41.897099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.897304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.897318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.785 qpair failed and we were unable to recover it. 00:31:15.785 [2024-05-15 19:46:41.897705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.897914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.897921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.785 qpair failed and we were unable to recover it. 00:31:15.785 [2024-05-15 19:46:41.898328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.898636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.898643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.785 qpair failed and we were unable to recover it. 00:31:15.785 [2024-05-15 19:46:41.899059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.899404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.899413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.785 qpair failed and we were unable to recover it. 00:31:15.785 [2024-05-15 19:46:41.899657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.900043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.900049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.785 qpair failed and we were unable to recover it. 00:31:15.785 [2024-05-15 19:46:41.900637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.900992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.901000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.785 qpair failed and we were unable to recover it. 00:31:15.785 [2024-05-15 19:46:41.901471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.901716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.901724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.785 qpair failed and we were unable to recover it. 
00:31:15.785 [2024-05-15 19:46:41.902135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.902479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.902486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.785 qpair failed and we were unable to recover it. 00:31:15.785 [2024-05-15 19:46:41.902943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.903223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.903231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.785 qpair failed and we were unable to recover it. 00:31:15.785 [2024-05-15 19:46:41.903550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.903903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.903911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.785 qpair failed and we were unable to recover it. 00:31:15.785 [2024-05-15 19:46:41.904326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.904715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.904723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.785 qpair failed and we were unable to recover it. 00:31:15.785 [2024-05-15 19:46:41.905114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.905380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.905387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.785 qpair failed and we were unable to recover it. 00:31:15.785 [2024-05-15 19:46:41.905583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.905909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.905916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.785 qpair failed and we were unable to recover it. 00:31:15.785 [2024-05-15 19:46:41.906311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.906583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.906591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.785 qpair failed and we were unable to recover it. 
00:31:15.785 [2024-05-15 19:46:41.906933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.907289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.907297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.785 qpair failed and we were unable to recover it. 00:31:15.785 [2024-05-15 19:46:41.907672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.908067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.908074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.785 qpair failed and we were unable to recover it. 00:31:15.785 [2024-05-15 19:46:41.908445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.908840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.908848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.785 qpair failed and we were unable to recover it. 00:31:15.785 [2024-05-15 19:46:41.909212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.909623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.909630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.785 qpair failed and we were unable to recover it. 00:31:15.785 [2024-05-15 19:46:41.909977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.910342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.910350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.785 qpair failed and we were unable to recover it. 00:31:15.785 [2024-05-15 19:46:41.910703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.911082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.911088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.785 qpair failed and we were unable to recover it. 00:31:15.785 [2024-05-15 19:46:41.911405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.911532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.911539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.785 qpair failed and we were unable to recover it. 
00:31:15.785 [2024-05-15 19:46:41.911921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.912327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.912334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.785 qpair failed and we were unable to recover it. 00:31:15.785 [2024-05-15 19:46:41.912509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.912877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.912883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.785 qpair failed and we were unable to recover it. 00:31:15.785 [2024-05-15 19:46:41.913094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.913334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.913341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.785 qpair failed and we were unable to recover it. 00:31:15.785 [2024-05-15 19:46:41.913755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.914107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.914113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.785 qpair failed and we were unable to recover it. 00:31:15.785 [2024-05-15 19:46:41.914360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.914664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.785 [2024-05-15 19:46:41.914671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.785 qpair failed and we were unable to recover it. 00:31:15.786 [2024-05-15 19:46:41.915065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.915471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.915478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.786 qpair failed and we were unable to recover it. 00:31:15.786 [2024-05-15 19:46:41.915775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.916033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.916039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.786 qpair failed and we were unable to recover it. 
00:31:15.786 [2024-05-15 19:46:41.916403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.916648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.916655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.786 qpair failed and we were unable to recover it. 00:31:15.786 [2024-05-15 19:46:41.917010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.917178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.917185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.786 qpair failed and we were unable to recover it. 00:31:15.786 [2024-05-15 19:46:41.917363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.917789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.917796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.786 qpair failed and we were unable to recover it. 00:31:15.786 [2024-05-15 19:46:41.918085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.918471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.918478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.786 qpair failed and we were unable to recover it. 00:31:15.786 [2024-05-15 19:46:41.918569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.918890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.918897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.786 qpair failed and we were unable to recover it. 00:31:15.786 [2024-05-15 19:46:41.919289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.919661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.919668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.786 qpair failed and we were unable to recover it. 00:31:15.786 [2024-05-15 19:46:41.920019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.920403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.920411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.786 qpair failed and we were unable to recover it. 
00:31:15.786 [2024-05-15 19:46:41.920763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.921116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.921122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.786 qpair failed and we were unable to recover it. 00:31:15.786 [2024-05-15 19:46:41.921476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.921867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.921873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.786 qpair failed and we were unable to recover it. 00:31:15.786 [2024-05-15 19:46:41.922269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.922435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.922442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.786 qpair failed and we were unable to recover it. 00:31:15.786 [2024-05-15 19:46:41.922617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.922967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.922974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.786 qpair failed and we were unable to recover it. 00:31:15.786 [2024-05-15 19:46:41.923322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.923567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.923573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.786 qpair failed and we were unable to recover it. 00:31:15.786 [2024-05-15 19:46:41.923958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.924216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.924229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.786 qpair failed and we were unable to recover it. 00:31:15.786 [2024-05-15 19:46:41.924611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.924969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.924975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.786 qpair failed and we were unable to recover it. 
00:31:15.786 [2024-05-15 19:46:41.925368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.925633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.925639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.786 qpair failed and we were unable to recover it. 00:31:15.786 [2024-05-15 19:46:41.925914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.926193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.926201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.786 qpair failed and we were unable to recover it. 00:31:15.786 [2024-05-15 19:46:41.926420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.927287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.927303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.786 qpair failed and we were unable to recover it. 00:31:15.786 [2024-05-15 19:46:41.927545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.927909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.927917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.786 qpair failed and we were unable to recover it. 00:31:15.786 [2024-05-15 19:46:41.928288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.928518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.928525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.786 qpair failed and we were unable to recover it. 00:31:15.786 [2024-05-15 19:46:41.928848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.929243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.929249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.786 qpair failed and we were unable to recover it. 00:31:15.786 [2024-05-15 19:46:41.929659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.930092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.930098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.786 qpair failed and we were unable to recover it. 
00:31:15.786 [2024-05-15 19:46:41.930300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.930736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.930743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.786 qpair failed and we were unable to recover it. 00:31:15.786 [2024-05-15 19:46:41.931046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.931217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.786 [2024-05-15 19:46:41.931223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.786 qpair failed and we were unable to recover it. 00:31:15.786 [2024-05-15 19:46:41.931603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.931756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.931763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.787 qpair failed and we were unable to recover it. 00:31:15.787 [2024-05-15 19:46:41.932202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.932453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.932460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.787 qpair failed and we were unable to recover it. 00:31:15.787 [2024-05-15 19:46:41.932818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.933014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.933021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.787 qpair failed and we were unable to recover it. 00:31:15.787 [2024-05-15 19:46:41.933295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.933502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.933509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.787 qpair failed and we were unable to recover it. 00:31:15.787 [2024-05-15 19:46:41.933857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.934246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.934252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.787 qpair failed and we were unable to recover it. 
00:31:15.787 [2024-05-15 19:46:41.934538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.934940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.934947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.787 qpair failed and we were unable to recover it. 00:31:15.787 [2024-05-15 19:46:41.935185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.935469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.935477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.787 qpair failed and we were unable to recover it. 00:31:15.787 [2024-05-15 19:46:41.935864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.936233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.936240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.787 qpair failed and we were unable to recover it. 00:31:15.787 [2024-05-15 19:46:41.936486] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:31:15.787 [2024-05-15 19:46:41.936534] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:15.787 [2024-05-15 19:46:41.936579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.936900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.936909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.787 qpair failed and we were unable to recover it. 00:31:15.787 [2024-05-15 19:46:41.937282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.937347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.937355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.787 qpair failed and we were unable to recover it. 00:31:15.787 [2024-05-15 19:46:41.937626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.937931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.937938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.787 qpair failed and we were unable to recover it. 
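The DPDK EAL initialization line above reflects the -m 0xF0 coremask passed to nvmf_tgt: the restarted target's reactors are pinned to logical cores 4-7, leaving the lower cores to the initiator side of the test. Decoding such a mask from the shell (using the value from this log) is straightforward:

  echo 'obase=2; ibase=16; F0' | bc    # prints 11110000: bits 4..7 set, i.e. cores 4-7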
00:31:15.787 [2024-05-15 19:46:41.938252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.938619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.938626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.787 qpair failed and we were unable to recover it. 00:31:15.787 [2024-05-15 19:46:41.938888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.939211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.939218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.787 qpair failed and we were unable to recover it. 00:31:15.787 [2024-05-15 19:46:41.939272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.939540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.939548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.787 qpair failed and we were unable to recover it. 00:31:15.787 [2024-05-15 19:46:41.939945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.940050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.940058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.787 qpair failed and we were unable to recover it. 00:31:15.787 [2024-05-15 19:46:41.940419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.940703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.940710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.787 qpair failed and we were unable to recover it. 00:31:15.787 [2024-05-15 19:46:41.941100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.941377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.941384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.787 qpair failed and we were unable to recover it. 00:31:15.787 [2024-05-15 19:46:41.941761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.941851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.941858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.787 qpair failed and we were unable to recover it. 
00:31:15.787 [2024-05-15 19:46:41.942262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.942472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.942481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.787 qpair failed and we were unable to recover it. 00:31:15.787 [2024-05-15 19:46:41.942866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.943223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.943230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.787 qpair failed and we were unable to recover it. 00:31:15.787 [2024-05-15 19:46:41.943479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.943745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.943752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.787 qpair failed and we were unable to recover it. 00:31:15.787 [2024-05-15 19:46:41.944032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.944277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.944284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.787 qpair failed and we were unable to recover it. 00:31:15.787 [2024-05-15 19:46:41.944733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.945088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.945096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.787 qpair failed and we were unable to recover it. 00:31:15.787 [2024-05-15 19:46:41.945460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.945728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.945736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.787 qpair failed and we were unable to recover it. 00:31:15.787 [2024-05-15 19:46:41.945992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.946296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.946304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.787 qpair failed and we were unable to recover it. 
00:31:15.787 [2024-05-15 19:46:41.946591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.946909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.946916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.787 qpair failed and we were unable to recover it. 00:31:15.787 [2024-05-15 19:46:41.947343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.947709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.947716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.787 qpair failed and we were unable to recover it. 00:31:15.787 [2024-05-15 19:46:41.948101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.948352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.948360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.787 qpair failed and we were unable to recover it. 00:31:15.787 [2024-05-15 19:46:41.948640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.948801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.787 [2024-05-15 19:46:41.948808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:15.787 qpair failed and we were unable to recover it. 00:31:15.787 [2024-05-15 19:46:41.949145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.058 [2024-05-15 19:46:41.949470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.058 [2024-05-15 19:46:41.949479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.058 qpair failed and we were unable to recover it. 00:31:16.058 [2024-05-15 19:46:41.949703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.058 [2024-05-15 19:46:41.949953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.058 [2024-05-15 19:46:41.949960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.058 qpair failed and we were unable to recover it. 00:31:16.058 [2024-05-15 19:46:41.950322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.058 [2024-05-15 19:46:41.950554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.058 [2024-05-15 19:46:41.950562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.058 qpair failed and we were unable to recover it. 
00:31:16.058 [2024-05-15 19:46:41.950930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.058 [2024-05-15 19:46:41.951240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.058 [2024-05-15 19:46:41.951247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.058 qpair failed and we were unable to recover it. 00:31:16.058 [2024-05-15 19:46:41.951624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.058 [2024-05-15 19:46:41.951863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.058 [2024-05-15 19:46:41.951870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.058 qpair failed and we were unable to recover it. 00:31:16.058 [2024-05-15 19:46:41.952090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.058 [2024-05-15 19:46:41.952416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.058 [2024-05-15 19:46:41.952423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.058 qpair failed and we were unable to recover it. 00:31:16.058 [2024-05-15 19:46:41.952796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.058 [2024-05-15 19:46:41.952954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.058 [2024-05-15 19:46:41.952962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.058 qpair failed and we were unable to recover it. 00:31:16.058 [2024-05-15 19:46:41.953280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.058 [2024-05-15 19:46:41.953599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.058 [2024-05-15 19:46:41.953607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.058 qpair failed and we were unable to recover it. 00:31:16.058 [2024-05-15 19:46:41.953845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.058 [2024-05-15 19:46:41.954068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.058 [2024-05-15 19:46:41.954075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.058 qpair failed and we were unable to recover it. 00:31:16.058 [2024-05-15 19:46:41.954455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.058 [2024-05-15 19:46:41.954910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.058 [2024-05-15 19:46:41.954917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.058 qpair failed and we were unable to recover it. 
00:31:16.058 [2024-05-15 19:46:41.955295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.058 [2024-05-15 19:46:41.955662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.058 [2024-05-15 19:46:41.955669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.058 qpair failed and we were unable to recover it. 00:31:16.058 [2024-05-15 19:46:41.956083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.058 [2024-05-15 19:46:41.956490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.058 [2024-05-15 19:46:41.956497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.058 qpair failed and we were unable to recover it. 00:31:16.058 [2024-05-15 19:46:41.956846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.058 [2024-05-15 19:46:41.957201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.058 [2024-05-15 19:46:41.957207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.058 qpair failed and we were unable to recover it. 00:31:16.058 [2024-05-15 19:46:41.957502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.058 [2024-05-15 19:46:41.957691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.058 [2024-05-15 19:46:41.957699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.058 qpair failed and we were unable to recover it. 00:31:16.058 [2024-05-15 19:46:41.958045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.058 [2024-05-15 19:46:41.958403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.058 [2024-05-15 19:46:41.958409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.058 qpair failed and we were unable to recover it. 00:31:16.058 [2024-05-15 19:46:41.958715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.058 [2024-05-15 19:46:41.958948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.058 [2024-05-15 19:46:41.958954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.058 qpair failed and we were unable to recover it. 00:31:16.058 [2024-05-15 19:46:41.959446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.058 [2024-05-15 19:46:41.959708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.058 [2024-05-15 19:46:41.959716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.059 qpair failed and we were unable to recover it. 
00:31:16.059 [2024-05-15 19:46:41.960040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.960459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.960465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.059 qpair failed and we were unable to recover it. 00:31:16.059 [2024-05-15 19:46:41.960649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.961040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.961047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.059 qpair failed and we were unable to recover it. 00:31:16.059 [2024-05-15 19:46:41.961397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.961846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.961852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.059 qpair failed and we were unable to recover it. 00:31:16.059 [2024-05-15 19:46:41.962056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.962490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.962497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.059 qpair failed and we were unable to recover it. 00:31:16.059 [2024-05-15 19:46:41.962850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.963274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.963280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.059 qpair failed and we were unable to recover it. 00:31:16.059 [2024-05-15 19:46:41.963455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.963746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.963752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.059 qpair failed and we were unable to recover it. 00:31:16.059 [2024-05-15 19:46:41.964151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.964514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.964521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.059 qpair failed and we were unable to recover it. 
00:31:16.059 [2024-05-15 19:46:41.964798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.965187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.965194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.059 qpair failed and we were unable to recover it. 00:31:16.059 [2024-05-15 19:46:41.965568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.965807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.965814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.059 qpair failed and we were unable to recover it. 00:31:16.059 [2024-05-15 19:46:41.966153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.966431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.966438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.059 qpair failed and we were unable to recover it. 00:31:16.059 [2024-05-15 19:46:41.966670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.967059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.967066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.059 qpair failed and we were unable to recover it. 00:31:16.059 [2024-05-15 19:46:41.967446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.967823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.967830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.059 qpair failed and we were unable to recover it. 00:31:16.059 [2024-05-15 19:46:41.968234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.968515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.968522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.059 qpair failed and we were unable to recover it. 00:31:16.059 [2024-05-15 19:46:41.968795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.969152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.969159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.059 qpair failed and we were unable to recover it. 
00:31:16.059 [2024-05-15 19:46:41.969469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.969859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.969866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.059 qpair failed and we were unable to recover it. 00:31:16.059 [2024-05-15 19:46:41.970212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.970562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.970569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.059 qpair failed and we were unable to recover it. 00:31:16.059 [2024-05-15 19:46:41.970943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.971301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.971308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.059 qpair failed and we were unable to recover it. 00:31:16.059 [2024-05-15 19:46:41.971574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.971978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.971985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.059 qpair failed and we were unable to recover it. 00:31:16.059 [2024-05-15 19:46:41.972326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.972739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.972746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.059 qpair failed and we were unable to recover it. 00:31:16.059 EAL: No free 2048 kB hugepages reported on node 1 00:31:16.059 [2024-05-15 19:46:41.973025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.973448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.973455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.059 qpair failed and we were unable to recover it. 00:31:16.059 [2024-05-15 19:46:41.973822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.974057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.974064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.059 qpair failed and we were unable to recover it. 
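Aside: the interleaved "EAL: No free 2048 kB hugepages reported on node 1" line above comes from DPDK's environment abstraction layer, not from the NVMe/TCP code; it only states that NUMA node 1 has no free 2 MB hugepages and is informational for this run. A minimal standalone C sketch (illustrative only, not part of this job; it reads the standard Linux per-node sysfs counter, which is present on NUMA-enabled kernels) for inspecting that count:

/* Illustrative only: read the free 2048 kB hugepage count for NUMA node 1,
 * i.e. the counter the EAL notice above refers to. */
#include <stdio.h>

int main(void)
{
    const char *path =
        "/sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages";
    FILE *f = fopen(path, "r");
    long free_pages = -1;

    if (f != NULL) {
        if (fscanf(f, "%ld", &free_pages) != 1) {
            free_pages = -1;   /* counter unreadable */
        }
        fclose(f);
    }
    printf("node1 free 2048kB hugepages: %ld\n", free_pages);
    return 0;
}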
00:31:16.059 [2024-05-15 19:46:41.974484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.974842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.974848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.059 qpair failed and we were unable to recover it. 00:31:16.059 [2024-05-15 19:46:41.975240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.975605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.975613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.059 qpair failed and we were unable to recover it. 00:31:16.059 [2024-05-15 19:46:41.976008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.976375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.976382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.059 qpair failed and we were unable to recover it. 00:31:16.059 [2024-05-15 19:46:41.976661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.976996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.977002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.059 qpair failed and we were unable to recover it. 00:31:16.059 [2024-05-15 19:46:41.977262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.977529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.977536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.059 qpair failed and we were unable to recover it. 00:31:16.059 [2024-05-15 19:46:41.977754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.978142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.978149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.059 qpair failed and we were unable to recover it. 00:31:16.059 [2024-05-15 19:46:41.978534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.978958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.978965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.059 qpair failed and we were unable to recover it. 
00:31:16.059 [2024-05-15 19:46:41.979303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.983332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.059 [2024-05-15 19:46:41.983356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.059 qpair failed and we were unable to recover it. 00:31:16.059 [2024-05-15 19:46:41.983697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:41.984069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:41.984080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.060 qpair failed and we were unable to recover it. 00:31:16.060 [2024-05-15 19:46:41.984512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:41.984784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:41.984798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.060 qpair failed and we were unable to recover it. 00:31:16.060 [2024-05-15 19:46:41.984999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:41.985361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:41.985372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.060 qpair failed and we were unable to recover it. 00:31:16.060 [2024-05-15 19:46:41.985762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:41.986141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:41.986158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.060 qpair failed and we were unable to recover it. 00:31:16.060 [2024-05-15 19:46:41.986479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:41.986898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:41.986913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.060 qpair failed and we were unable to recover it. 00:31:16.060 [2024-05-15 19:46:41.987188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:41.987614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:41.987630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.060 qpair failed and we were unable to recover it. 
00:31:16.060 [2024-05-15 19:46:41.987915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:41.988159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:41.988173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.060 qpair failed and we were unable to recover it. 00:31:16.060 [2024-05-15 19:46:41.988495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:41.988899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:41.988910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.060 qpair failed and we were unable to recover it. 00:31:16.060 [2024-05-15 19:46:41.989268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:41.989640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:41.989657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.060 qpair failed and we were unable to recover it. 00:31:16.060 [2024-05-15 19:46:41.990087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:41.990504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:41.990519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.060 qpair failed and we were unable to recover it. 00:31:16.060 [2024-05-15 19:46:41.990913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:41.991130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:41.991144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.060 qpair failed and we were unable to recover it. 00:31:16.060 [2024-05-15 19:46:41.991534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:41.991950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:41.991967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.060 qpair failed and we were unable to recover it. 00:31:16.060 [2024-05-15 19:46:41.992311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:41.992670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:41.992680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.060 qpair failed and we were unable to recover it. 
00:31:16.060 [2024-05-15 19:46:41.993068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:41.993443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:41.993459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.060 qpair failed and we were unable to recover it. 00:31:16.060 [2024-05-15 19:46:41.993879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:41.994251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:41.994266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.060 qpair failed and we were unable to recover it. 00:31:16.060 [2024-05-15 19:46:41.994699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:41.995010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:41.995025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.060 qpair failed and we were unable to recover it. 00:31:16.060 [2024-05-15 19:46:41.995414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:41.995893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:41.995909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.060 qpair failed and we were unable to recover it. 00:31:16.060 [2024-05-15 19:46:41.996125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:41.996541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:41.996557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.060 qpair failed and we were unable to recover it. 00:31:16.060 [2024-05-15 19:46:41.996895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:41.997304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:41.997324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.060 qpair failed and we were unable to recover it. 00:31:16.060 [2024-05-15 19:46:41.997692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:41.997963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:41.997977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.060 qpair failed and we were unable to recover it. 
00:31:16.060 [2024-05-15 19:46:41.998389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:41.998642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:41.998660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.060 qpair failed and we were unable to recover it. 00:31:16.060 [2024-05-15 19:46:41.998844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:41.999127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:41.999141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.060 qpair failed and we were unable to recover it. 00:31:16.060 [2024-05-15 19:46:41.999561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:41.999971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:41.999990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.060 qpair failed and we were unable to recover it. 00:31:16.060 [2024-05-15 19:46:42.000331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:42.000701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:42.000712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.060 qpair failed and we were unable to recover it. 00:31:16.060 [2024-05-15 19:46:42.000890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:42.001320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:42.001340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.060 qpair failed and we were unable to recover it. 00:31:16.060 [2024-05-15 19:46:42.001723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:42.001973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:42.001988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.060 qpair failed and we were unable to recover it. 00:31:16.060 [2024-05-15 19:46:42.002395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:42.002716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:42.002730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.060 qpair failed and we were unable to recover it. 
00:31:16.060 [2024-05-15 19:46:42.003088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:42.003498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:42.003515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.060 qpair failed and we were unable to recover it. 00:31:16.060 [2024-05-15 19:46:42.003908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:42.004331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:42.004348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.060 qpair failed and we were unable to recover it. 00:31:16.060 [2024-05-15 19:46:42.004759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:42.004963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.060 [2024-05-15 19:46:42.004976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.060 qpair failed and we were unable to recover it. 00:31:16.061 [2024-05-15 19:46:42.005403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.005807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.005823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.061 qpair failed and we were unable to recover it. 00:31:16.061 [2024-05-15 19:46:42.006095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.006542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.006562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.061 qpair failed and we were unable to recover it. 00:31:16.061 [2024-05-15 19:46:42.006915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.007335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.007350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.061 qpair failed and we were unable to recover it. 00:31:16.061 [2024-05-15 19:46:42.007736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.008098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.008110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.061 qpair failed and we were unable to recover it. 
00:31:16.061 [2024-05-15 19:46:42.008511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.008813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.008825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.061 qpair failed and we were unable to recover it. 00:31:16.061 [2024-05-15 19:46:42.009177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.009561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.009573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.061 qpair failed and we were unable to recover it. 00:31:16.061 [2024-05-15 19:46:42.009855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.010249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.010260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.061 qpair failed and we were unable to recover it. 00:31:16.061 [2024-05-15 19:46:42.010641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.011055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.011066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.061 qpair failed and we were unable to recover it. 00:31:16.061 [2024-05-15 19:46:42.011256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.011625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.011636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.061 qpair failed and we were unable to recover it. 00:31:16.061 [2024-05-15 19:46:42.012002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.012371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.012383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.061 qpair failed and we were unable to recover it. 00:31:16.061 [2024-05-15 19:46:42.012758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.013177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.013190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.061 qpair failed and we were unable to recover it. 
00:31:16.061 [2024-05-15 19:46:42.013511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:16.061 [2024-05-15 19:46:42.013606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.014036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.014049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.061 qpair failed and we were unable to recover it. 00:31:16.061 [2024-05-15 19:46:42.015320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.015571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.015585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.061 qpair failed and we were unable to recover it. 00:31:16.061 [2024-05-15 19:46:42.015959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.016377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.016391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.061 qpair failed and we were unable to recover it. 00:31:16.061 [2024-05-15 19:46:42.016775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.017150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.017162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.061 qpair failed and we were unable to recover it. 00:31:16.061 [2024-05-15 19:46:42.017570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.017873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.017885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.061 qpair failed and we were unable to recover it. 00:31:16.061 [2024-05-15 19:46:42.018133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.018415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.018427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.061 qpair failed and we were unable to recover it. 00:31:16.061 [2024-05-15 19:46:42.018817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.019099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.019113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.061 qpair failed and we were unable to recover it. 
00:31:16.061 [2024-05-15 19:46:42.019549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.019902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.019914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.061 qpair failed and we were unable to recover it. 00:31:16.061 [2024-05-15 19:46:42.020238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.020631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.020644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.061 qpair failed and we were unable to recover it. 00:31:16.061 [2024-05-15 19:46:42.021043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.021458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.021471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.061 qpair failed and we were unable to recover it. 00:31:16.061 [2024-05-15 19:46:42.021877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.022277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.022289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.061 qpair failed and we were unable to recover it. 00:31:16.061 [2024-05-15 19:46:42.022666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.026321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.026341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.061 qpair failed and we were unable to recover it. 00:31:16.061 [2024-05-15 19:46:42.026723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.027105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.027117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.061 qpair failed and we were unable to recover it. 00:31:16.061 [2024-05-15 19:46:42.027520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.027939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.027951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.061 qpair failed and we were unable to recover it. 
00:31:16.061 [2024-05-15 19:46:42.028324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.028694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.028707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.061 qpair failed and we were unable to recover it. 00:31:16.061 [2024-05-15 19:46:42.029106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.029495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.029509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.061 qpair failed and we were unable to recover it. 00:31:16.061 [2024-05-15 19:46:42.029789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.030201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.030213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.061 qpair failed and we were unable to recover it. 00:31:16.061 [2024-05-15 19:46:42.030609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.030979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.061 [2024-05-15 19:46:42.030991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.061 qpair failed and we were unable to recover it. 00:31:16.061 [2024-05-15 19:46:42.031357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.031733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.031745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.062 qpair failed and we were unable to recover it. 00:31:16.062 [2024-05-15 19:46:42.032143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.032548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.032561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.062 qpair failed and we were unable to recover it. 00:31:16.062 [2024-05-15 19:46:42.032801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.033059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.033072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.062 qpair failed and we were unable to recover it. 
00:31:16.062 [2024-05-15 19:46:42.033480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.033897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.033910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.062 qpair failed and we were unable to recover it. 00:31:16.062 [2024-05-15 19:46:42.034273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.034687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.034700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.062 qpair failed and we were unable to recover it. 00:31:16.062 [2024-05-15 19:46:42.034924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.035311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.035330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.062 qpair failed and we were unable to recover it. 00:31:16.062 [2024-05-15 19:46:42.035671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.036102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.036119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.062 qpair failed and we were unable to recover it. 00:31:16.062 [2024-05-15 19:46:42.036449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.036838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.036857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.062 qpair failed and we were unable to recover it. 00:31:16.062 [2024-05-15 19:46:42.037225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.037604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.037617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.062 qpair failed and we were unable to recover it. 00:31:16.062 [2024-05-15 19:46:42.038016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.038388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.038401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.062 qpair failed and we were unable to recover it. 
00:31:16.062 [2024-05-15 19:46:42.038810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.039234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.039247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.062 qpair failed and we were unable to recover it. 00:31:16.062 [2024-05-15 19:46:42.039650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.040027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.040041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.062 qpair failed and we were unable to recover it. 00:31:16.062 [2024-05-15 19:46:42.040411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.040814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.040827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.062 qpair failed and we were unable to recover it. 00:31:16.062 [2024-05-15 19:46:42.041181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.041585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.041597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.062 qpair failed and we were unable to recover it. 00:31:16.062 [2024-05-15 19:46:42.042026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.042396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.042409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.062 qpair failed and we were unable to recover it. 00:31:16.062 [2024-05-15 19:46:42.042806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.043227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.043240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.062 qpair failed and we were unable to recover it. 00:31:16.062 [2024-05-15 19:46:42.043609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.044025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.044037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.062 qpair failed and we were unable to recover it. 
00:31:16.062 [2024-05-15 19:46:42.044277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.044606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.044618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.062 qpair failed and we were unable to recover it. 00:31:16.062 [2024-05-15 19:46:42.044824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.045216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.045230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.062 qpair failed and we were unable to recover it. 00:31:16.062 [2024-05-15 19:46:42.045644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.046055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.046068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.062 qpair failed and we were unable to recover it. 00:31:16.062 [2024-05-15 19:46:42.046339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.046703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.046716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.062 qpair failed and we were unable to recover it. 00:31:16.062 [2024-05-15 19:46:42.047113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.047488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.047501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.062 qpair failed and we were unable to recover it. 00:31:16.062 [2024-05-15 19:46:42.047906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.048307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.048325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.062 qpair failed and we were unable to recover it. 00:31:16.062 [2024-05-15 19:46:42.048728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.049184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.062 [2024-05-15 19:46:42.049196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.062 qpair failed and we were unable to recover it. 
00:31:16.062 [2024-05-15 19:46:42.049597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.049916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.049928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.063 qpair failed and we were unable to recover it. 00:31:16.063 [2024-05-15 19:46:42.050311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.050765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.050778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.063 qpair failed and we were unable to recover it. 00:31:16.063 [2024-05-15 19:46:42.051176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.051574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.051586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.063 qpair failed and we were unable to recover it. 00:31:16.063 [2024-05-15 19:46:42.051961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.052329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.052342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.063 qpair failed and we were unable to recover it. 00:31:16.063 [2024-05-15 19:46:42.052715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.053093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.053105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.063 qpair failed and we were unable to recover it. 00:31:16.063 [2024-05-15 19:46:42.053255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.053657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.053671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.063 qpair failed and we were unable to recover it. 00:31:16.063 [2024-05-15 19:46:42.053891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.054258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.054271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.063 qpair failed and we were unable to recover it. 
00:31:16.063 [2024-05-15 19:46:42.054594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.054825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.054837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.063 qpair failed and we were unable to recover it. 00:31:16.063 [2024-05-15 19:46:42.055207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.055587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.055599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.063 qpair failed and we were unable to recover it. 00:31:16.063 [2024-05-15 19:46:42.055842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.056102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.056113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.063 qpair failed and we were unable to recover it. 00:31:16.063 [2024-05-15 19:46:42.058323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.058747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.058764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.063 qpair failed and we were unable to recover it. 00:31:16.063 [2024-05-15 19:46:42.059175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.059558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.059576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.063 qpair failed and we were unable to recover it. 00:31:16.063 [2024-05-15 19:46:42.059958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.060387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.060403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.063 qpair failed and we were unable to recover it. 00:31:16.063 [2024-05-15 19:46:42.060822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.061084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.061098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.063 qpair failed and we were unable to recover it. 
00:31:16.063 [2024-05-15 19:46:42.061506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.061882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.061898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.063 qpair failed and we were unable to recover it. 00:31:16.063 [2024-05-15 19:46:42.062301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.062684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.062701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.063 qpair failed and we were unable to recover it. 00:31:16.063 [2024-05-15 19:46:42.063071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.063479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.063496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.063 qpair failed and we were unable to recover it. 00:31:16.063 [2024-05-15 19:46:42.063897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.064140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.064155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.063 qpair failed and we were unable to recover it. 00:31:16.063 [2024-05-15 19:46:42.064548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.064959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.064978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.063 qpair failed and we were unable to recover it. 00:31:16.063 [2024-05-15 19:46:42.065375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.065819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.065830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.063 qpair failed and we were unable to recover it. 00:31:16.063 [2024-05-15 19:46:42.066090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.066377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.066387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.063 qpair failed and we were unable to recover it. 
00:31:16.063 [2024-05-15 19:46:42.066766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.067170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.067179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.063 qpair failed and we were unable to recover it. 00:31:16.063 [2024-05-15 19:46:42.067568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.067851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.067859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.063 qpair failed and we were unable to recover it. 00:31:16.063 [2024-05-15 19:46:42.068114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.068513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.068521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.063 qpair failed and we were unable to recover it. 00:31:16.063 [2024-05-15 19:46:42.068926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.069195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.069203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.063 qpair failed and we were unable to recover it. 00:31:16.063 [2024-05-15 19:46:42.069557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.069905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.069913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.063 qpair failed and we were unable to recover it. 00:31:16.063 [2024-05-15 19:46:42.070327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.070724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.070731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.063 qpair failed and we were unable to recover it. 00:31:16.063 [2024-05-15 19:46:42.071099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.071455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.071463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.063 qpair failed and we were unable to recover it. 
00:31:16.063 [2024-05-15 19:46:42.071860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.072211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.072218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.063 qpair failed and we were unable to recover it. 00:31:16.063 [2024-05-15 19:46:42.072606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.073002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.063 [2024-05-15 19:46:42.073011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.064 qpair failed and we were unable to recover it. 00:31:16.064 [2024-05-15 19:46:42.073328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.064 [2024-05-15 19:46:42.073657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.064 [2024-05-15 19:46:42.073665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.064 qpair failed and we were unable to recover it. 00:31:16.064 [2024-05-15 19:46:42.074038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.064 [2024-05-15 19:46:42.074445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.064 [2024-05-15 19:46:42.074453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.064 qpair failed and we were unable to recover it. 00:31:16.064 [2024-05-15 19:46:42.074861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.064 [2024-05-15 19:46:42.075270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.064 [2024-05-15 19:46:42.075278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.064 qpair failed and we were unable to recover it. 00:31:16.064 [2024-05-15 19:46:42.075479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.064 [2024-05-15 19:46:42.075819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.064 [2024-05-15 19:46:42.075826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.064 qpair failed and we were unable to recover it. 00:31:16.064 [2024-05-15 19:46:42.076203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.064 [2024-05-15 19:46:42.076495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.064 [2024-05-15 19:46:42.076503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.064 qpair failed and we were unable to recover it. 
00:31:16.064 [2024-05-15 19:46:42.076893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.064 [2024-05-15 19:46:42.077294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.064 [2024-05-15 19:46:42.077301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420
00:31:16.064 qpair failed and we were unable to recover it.
00:31:16.064 [2024-05-15 19:46:42.077687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.064 [2024-05-15 19:46:42.078089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.064 [2024-05-15 19:46:42.078096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420
00:31:16.064 qpair failed and we were unable to recover it.
00:31:16.064 [2024-05-15 19:46:42.078463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.064 [2024-05-15 19:46:42.078830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.064 [2024-05-15 19:46:42.078838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420
00:31:16.064 qpair failed and we were unable to recover it.
00:31:16.064 [2024-05-15 19:46:42.078967] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:16.064 [2024-05-15 19:46:42.078996] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:16.064 [2024-05-15 19:46:42.079004] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:16.064 [2024-05-15 19:46:42.079045] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:16.064 [2024-05-15 19:46:42.079046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.064 [2024-05-15 19:46:42.079051] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:16.064 [2024-05-15 19:46:42.079226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:31:16.064 [2024-05-15 19:46:42.079369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.064 [2024-05-15 19:46:42.079378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420
00:31:16.064 qpair failed and we were unable to recover it.
00:31:16.064 [2024-05-15 19:46:42.079364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:31:16.064 [2024-05-15 19:46:42.079532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:31:16.064 [2024-05-15 19:46:42.079534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:31:16.064 [2024-05-15 19:46:42.079757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.064 [2024-05-15 19:46:42.080175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.064 [2024-05-15 19:46:42.080184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420
00:31:16.064 qpair failed and we were unable to recover it.
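The app_setup_trace NOTICE lines above describe the trace-snapshot workflow for this nvmf app instance. A minimal shell sketch of acting on those hints on the test node, assuming the nvmf target (instance id 0) is still running and the working directory is writable, would be:

  # Snapshot the nvmf tracepoints at runtime, exactly as the NOTICE suggests
  spdk_trace -s nvmf -i 0
  # Or preserve the raw shared-memory trace file for offline analysis/debug
  cp /dev/shm/nvmf_trace.0 .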
00:31:16.064 [2024-05-15 19:46:42.080583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.064 [2024-05-15 19:46:42.080993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.064 [2024-05-15 19:46:42.081001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.064 qpair failed and we were unable to recover it. 00:31:16.064 [2024-05-15 19:46:42.081380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.064 [2024-05-15 19:46:42.081630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.064 [2024-05-15 19:46:42.081638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.064 qpair failed and we were unable to recover it. 00:31:16.064 [2024-05-15 19:46:42.082018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.064 [2024-05-15 19:46:42.082299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.064 [2024-05-15 19:46:42.082307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.064 qpair failed and we were unable to recover it. 00:31:16.064 [2024-05-15 19:46:42.082695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.064 [2024-05-15 19:46:42.083101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.064 [2024-05-15 19:46:42.083109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.064 qpair failed and we were unable to recover it. 00:31:16.064 [2024-05-15 19:46:42.083500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.064 [2024-05-15 19:46:42.083859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.064 [2024-05-15 19:46:42.083867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.064 qpair failed and we were unable to recover it. 00:31:16.064 [2024-05-15 19:46:42.084234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.064 [2024-05-15 19:46:42.084479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.064 [2024-05-15 19:46:42.084488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.064 qpair failed and we were unable to recover it. 00:31:16.064 [2024-05-15 19:46:42.084822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.064 [2024-05-15 19:46:42.085163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.064 [2024-05-15 19:46:42.085171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.064 qpair failed and we were unable to recover it. 
00:31:16.064 [2024-05-15 19:46:42.085545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.064 [2024-05-15 19:46:42.085792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.064 [2024-05-15 19:46:42.085799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.064 qpair failed and we were unable to recover it. 00:31:16.064 [2024-05-15 19:46:42.086111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.064 [2024-05-15 19:46:42.086525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.064 [2024-05-15 19:46:42.086534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.064 qpair failed and we were unable to recover it. 00:31:16.064 [2024-05-15 19:46:42.086900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.064 [2024-05-15 19:46:42.087170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.064 [2024-05-15 19:46:42.087179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.064 qpair failed and we were unable to recover it. 00:31:16.064 [2024-05-15 19:46:42.087460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.064 [2024-05-15 19:46:42.087856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.064 [2024-05-15 19:46:42.087864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.064 qpair failed and we were unable to recover it. 00:31:16.064 [2024-05-15 19:46:42.088138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.064 [2024-05-15 19:46:42.088388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.064 [2024-05-15 19:46:42.088396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.064 qpair failed and we were unable to recover it. 00:31:16.064 [2024-05-15 19:46:42.088768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.064 [2024-05-15 19:46:42.089044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.064 [2024-05-15 19:46:42.089052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.064 qpair failed and we were unable to recover it. 00:31:16.064 [2024-05-15 19:46:42.089425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.064 [2024-05-15 19:46:42.089699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.064 [2024-05-15 19:46:42.089707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.064 qpair failed and we were unable to recover it. 
00:31:16.064 [2024-05-15 19:46:42.090075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.064 [2024-05-15 19:46:42.090478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.064 [2024-05-15 19:46:42.090486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.064 qpair failed and we were unable to recover it. 00:31:16.064 [2024-05-15 19:46:42.090853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.064 [2024-05-15 19:46:42.091257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.064 [2024-05-15 19:46:42.091265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.064 qpair failed and we were unable to recover it. 00:31:16.064 [2024-05-15 19:46:42.091679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.064 [2024-05-15 19:46:42.091899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.064 [2024-05-15 19:46:42.091907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.064 qpair failed and we were unable to recover it. 00:31:16.065 [2024-05-15 19:46:42.092273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.092644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.092652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.065 qpair failed and we were unable to recover it. 00:31:16.065 [2024-05-15 19:46:42.093022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.093305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.093330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.065 qpair failed and we were unable to recover it. 00:31:16.065 [2024-05-15 19:46:42.093701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.093866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.093875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.065 qpair failed and we were unable to recover it. 00:31:16.065 [2024-05-15 19:46:42.094174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.094566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.094575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.065 qpair failed and we were unable to recover it. 
00:31:16.065 [2024-05-15 19:46:42.094951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.095306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.095318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.065 qpair failed and we were unable to recover it. 00:31:16.065 [2024-05-15 19:46:42.095753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.095956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.095964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.065 qpair failed and we were unable to recover it. 00:31:16.065 [2024-05-15 19:46:42.096336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.096674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.096683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.065 qpair failed and we were unable to recover it. 00:31:16.065 [2024-05-15 19:46:42.097082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.097481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.097490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.065 qpair failed and we were unable to recover it. 00:31:16.065 [2024-05-15 19:46:42.097686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.098059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.098068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.065 qpair failed and we were unable to recover it. 00:31:16.065 [2024-05-15 19:46:42.098452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.098861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.098869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.065 qpair failed and we were unable to recover it. 00:31:16.065 [2024-05-15 19:46:42.099245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.099606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.099615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.065 qpair failed and we were unable to recover it. 
00:31:16.065 [2024-05-15 19:46:42.100010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.100411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.100419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.065 qpair failed and we were unable to recover it. 00:31:16.065 [2024-05-15 19:46:42.100627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.101019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.101028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.065 qpair failed and we were unable to recover it. 00:31:16.065 [2024-05-15 19:46:42.101399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.101760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.101767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.065 qpair failed and we were unable to recover it. 00:31:16.065 [2024-05-15 19:46:42.102136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.102539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.102548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.065 qpair failed and we were unable to recover it. 00:31:16.065 [2024-05-15 19:46:42.102942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.103185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.103194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.065 qpair failed and we were unable to recover it. 00:31:16.065 [2024-05-15 19:46:42.103585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.103727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.103735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.065 qpair failed and we were unable to recover it. 00:31:16.065 [2024-05-15 19:46:42.103979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.104336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.104344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.065 qpair failed and we were unable to recover it. 
00:31:16.065 [2024-05-15 19:46:42.104571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.104947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.104956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.065 qpair failed and we were unable to recover it. 00:31:16.065 [2024-05-15 19:46:42.105226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.105604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.105613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.065 qpair failed and we were unable to recover it. 00:31:16.065 [2024-05-15 19:46:42.105845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.106199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.106208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.065 qpair failed and we were unable to recover it. 00:31:16.065 [2024-05-15 19:46:42.106484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.106728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.106736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.065 qpair failed and we were unable to recover it. 00:31:16.065 [2024-05-15 19:46:42.106930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.107254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.107262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.065 qpair failed and we were unable to recover it. 00:31:16.065 [2024-05-15 19:46:42.107492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.107800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.107807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.065 qpair failed and we were unable to recover it. 00:31:16.065 [2024-05-15 19:46:42.108223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.108549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.108557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.065 qpair failed and we were unable to recover it. 
00:31:16.065 [2024-05-15 19:46:42.108927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.109292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.109300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.065 qpair failed and we were unable to recover it. 00:31:16.065 [2024-05-15 19:46:42.109541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.109953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.109961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.065 qpair failed and we were unable to recover it. 00:31:16.065 [2024-05-15 19:46:42.110194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.110411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.110419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.065 qpair failed and we were unable to recover it. 00:31:16.065 [2024-05-15 19:46:42.110763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.111172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.111181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.065 qpair failed and we were unable to recover it. 00:31:16.065 [2024-05-15 19:46:42.111391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.111777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.065 [2024-05-15 19:46:42.111785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.066 qpair failed and we were unable to recover it. 00:31:16.066 [2024-05-15 19:46:42.112154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.112514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.112523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.066 qpair failed and we were unable to recover it. 00:31:16.066 [2024-05-15 19:46:42.112739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.112952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.112960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.066 qpair failed and we were unable to recover it. 
00:31:16.066 [2024-05-15 19:46:42.113126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.113494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.113504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.066 qpair failed and we were unable to recover it. 00:31:16.066 [2024-05-15 19:46:42.113824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.114216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.114224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.066 qpair failed and we were unable to recover it. 00:31:16.066 [2024-05-15 19:46:42.114610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.114968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.114976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.066 qpair failed and we were unable to recover it. 00:31:16.066 [2024-05-15 19:46:42.115374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.115778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.115786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.066 qpair failed and we were unable to recover it. 00:31:16.066 [2024-05-15 19:46:42.115983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.116234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.116241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.066 qpair failed and we were unable to recover it. 00:31:16.066 [2024-05-15 19:46:42.116464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.116850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.116859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.066 qpair failed and we were unable to recover it. 00:31:16.066 [2024-05-15 19:46:42.117228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.117633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.117641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.066 qpair failed and we were unable to recover it. 
00:31:16.066 [2024-05-15 19:46:42.118069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.118433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.118441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.066 qpair failed and we were unable to recover it. 00:31:16.066 [2024-05-15 19:46:42.118809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.119125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.119132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.066 qpair failed and we were unable to recover it. 00:31:16.066 [2024-05-15 19:46:42.119344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.119752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.119760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.066 qpair failed and we were unable to recover it. 00:31:16.066 [2024-05-15 19:46:42.119967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.120372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.120380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.066 qpair failed and we were unable to recover it. 00:31:16.066 [2024-05-15 19:46:42.120626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.120955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.120963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.066 qpair failed and we were unable to recover it. 00:31:16.066 [2024-05-15 19:46:42.121240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.121403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.121412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.066 qpair failed and we were unable to recover it. 00:31:16.066 [2024-05-15 19:46:42.121760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.121971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.121978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.066 qpair failed and we were unable to recover it. 
00:31:16.066 [2024-05-15 19:46:42.122365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.122571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.122578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.066 qpair failed and we were unable to recover it. 00:31:16.066 [2024-05-15 19:46:42.122783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.123152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.123159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.066 qpair failed and we were unable to recover it. 00:31:16.066 [2024-05-15 19:46:42.123433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.123843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.123851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.066 qpair failed and we were unable to recover it. 00:31:16.066 [2024-05-15 19:46:42.124243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.124604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.124612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.066 qpair failed and we were unable to recover it. 00:31:16.066 [2024-05-15 19:46:42.124980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.125343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.125352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.066 qpair failed and we were unable to recover it. 00:31:16.066 [2024-05-15 19:46:42.125725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.126010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.126018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.066 qpair failed and we were unable to recover it. 00:31:16.066 [2024-05-15 19:46:42.126384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.126787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.126797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.066 qpair failed and we were unable to recover it. 
00:31:16.066 [2024-05-15 19:46:42.127161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.127335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.127343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.066 qpair failed and we were unable to recover it. 00:31:16.066 [2024-05-15 19:46:42.127766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.128188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.128196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.066 qpair failed and we were unable to recover it. 00:31:16.066 [2024-05-15 19:46:42.128587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.128801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.128808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.066 qpair failed and we were unable to recover it. 00:31:16.066 [2024-05-15 19:46:42.129182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.129579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.129587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.066 qpair failed and we were unable to recover it. 00:31:16.066 [2024-05-15 19:46:42.129963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.130371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.130378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.066 qpair failed and we were unable to recover it. 00:31:16.066 [2024-05-15 19:46:42.130754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.131148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.131157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.066 qpair failed and we were unable to recover it. 00:31:16.066 [2024-05-15 19:46:42.131628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.066 [2024-05-15 19:46:42.131986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.131994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.067 qpair failed and we were unable to recover it. 
00:31:16.067 [2024-05-15 19:46:42.132367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.132746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.132754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.067 qpair failed and we were unable to recover it. 00:31:16.067 [2024-05-15 19:46:42.133085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.133484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.133492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.067 qpair failed and we were unable to recover it. 00:31:16.067 [2024-05-15 19:46:42.133870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.134274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.134284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.067 qpair failed and we were unable to recover it. 00:31:16.067 [2024-05-15 19:46:42.134689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.134914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.134922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.067 qpair failed and we were unable to recover it. 00:31:16.067 [2024-05-15 19:46:42.135315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.135522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.135530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.067 qpair failed and we were unable to recover it. 00:31:16.067 [2024-05-15 19:46:42.135890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.136302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.136311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.067 qpair failed and we were unable to recover it. 00:31:16.067 [2024-05-15 19:46:42.136698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.137038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.137047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.067 qpair failed and we were unable to recover it. 
00:31:16.067 [2024-05-15 19:46:42.137437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.137799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.137807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.067 qpair failed and we were unable to recover it. 00:31:16.067 [2024-05-15 19:46:42.138265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.138559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.138567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.067 qpair failed and we were unable to recover it. 00:31:16.067 [2024-05-15 19:46:42.138931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.139330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.139338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.067 qpair failed and we were unable to recover it. 00:31:16.067 [2024-05-15 19:46:42.139727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.140083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.140090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.067 qpair failed and we were unable to recover it. 00:31:16.067 [2024-05-15 19:46:42.140481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.140828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.140835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.067 qpair failed and we were unable to recover it. 00:31:16.067 [2024-05-15 19:46:42.140906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.141262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.141272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.067 qpair failed and we were unable to recover it. 00:31:16.067 [2024-05-15 19:46:42.141670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.141912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.141919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.067 qpair failed and we were unable to recover it. 
00:31:16.067 [2024-05-15 19:46:42.142293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.142509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.142517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.067 qpair failed and we were unable to recover it. 00:31:16.067 [2024-05-15 19:46:42.142861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.143039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.143048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.067 qpair failed and we were unable to recover it. 00:31:16.067 [2024-05-15 19:46:42.143382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.143743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.143751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.067 qpair failed and we were unable to recover it. 00:31:16.067 [2024-05-15 19:46:42.144072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.144446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.144454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.067 qpair failed and we were unable to recover it. 00:31:16.067 [2024-05-15 19:46:42.144732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.145139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.145147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.067 qpair failed and we were unable to recover it. 00:31:16.067 [2024-05-15 19:46:42.145506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.145889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.145897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.067 qpair failed and we were unable to recover it. 00:31:16.067 [2024-05-15 19:46:42.146263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.146662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.146670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.067 qpair failed and we were unable to recover it. 
00:31:16.067 [2024-05-15 19:46:42.147039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.147215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.147223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.067 qpair failed and we were unable to recover it. 00:31:16.067 [2024-05-15 19:46:42.147587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.147941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.147952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.067 qpair failed and we were unable to recover it. 00:31:16.067 [2024-05-15 19:46:42.148342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.148590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.148598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.067 qpair failed and we were unable to recover it. 00:31:16.067 [2024-05-15 19:46:42.148965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.149368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.067 [2024-05-15 19:46:42.149375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.068 qpair failed and we were unable to recover it. 00:31:16.068 [2024-05-15 19:46:42.149704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.150063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.150070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.068 qpair failed and we were unable to recover it. 00:31:16.068 [2024-05-15 19:46:42.150444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.150850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.150857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.068 qpair failed and we were unable to recover it. 00:31:16.068 [2024-05-15 19:46:42.151214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.151454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.151462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.068 qpair failed and we were unable to recover it. 
00:31:16.068 [2024-05-15 19:46:42.151833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.152238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.152245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.068 qpair failed and we were unable to recover it. 00:31:16.068 [2024-05-15 19:46:42.152525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.152883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.152891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.068 qpair failed and we were unable to recover it. 00:31:16.068 [2024-05-15 19:46:42.153126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.153532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.153540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.068 qpair failed and we were unable to recover it. 00:31:16.068 [2024-05-15 19:46:42.153931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.154331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.154339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.068 qpair failed and we were unable to recover it. 00:31:16.068 [2024-05-15 19:46:42.154571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.154948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.154955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.068 qpair failed and we were unable to recover it. 00:31:16.068 [2024-05-15 19:46:42.155349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.155703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.155711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.068 qpair failed and we were unable to recover it. 00:31:16.068 [2024-05-15 19:46:42.155949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.156330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.156338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.068 qpair failed and we were unable to recover it. 
00:31:16.068 [2024-05-15 19:46:42.156728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.157127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.157134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.068 qpair failed and we were unable to recover it. 00:31:16.068 [2024-05-15 19:46:42.157364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.157729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.157737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.068 qpair failed and we were unable to recover it. 00:31:16.068 [2024-05-15 19:46:42.158112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.158522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.158530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.068 qpair failed and we were unable to recover it. 00:31:16.068 [2024-05-15 19:46:42.158890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.159282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.159290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.068 qpair failed and we were unable to recover it. 00:31:16.068 [2024-05-15 19:46:42.159682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.160035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.160044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.068 qpair failed and we were unable to recover it. 00:31:16.068 [2024-05-15 19:46:42.160413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.160852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.160860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.068 qpair failed and we were unable to recover it. 00:31:16.068 [2024-05-15 19:46:42.161050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.161435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.161443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.068 qpair failed and we were unable to recover it. 
00:31:16.068 [2024-05-15 19:46:42.161781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.162137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.162145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.068 qpair failed and we were unable to recover it. 00:31:16.068 [2024-05-15 19:46:42.162538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.162938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.162946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.068 qpair failed and we were unable to recover it. 00:31:16.068 [2024-05-15 19:46:42.163216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.163422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.163431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.068 qpair failed and we were unable to recover it. 00:31:16.068 [2024-05-15 19:46:42.163832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.164233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.164241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.068 qpair failed and we were unable to recover it. 00:31:16.068 [2024-05-15 19:46:42.164532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.164903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.164910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.068 qpair failed and we were unable to recover it. 00:31:16.068 [2024-05-15 19:46:42.165301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.165700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.165708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.068 qpair failed and we were unable to recover it. 00:31:16.068 [2024-05-15 19:46:42.166077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.166431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.166440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.068 qpair failed and we were unable to recover it. 
00:31:16.068 [2024-05-15 19:46:42.166692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.167098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.167106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.068 qpair failed and we were unable to recover it. 00:31:16.068 [2024-05-15 19:46:42.167481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.167656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.167664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.068 qpair failed and we were unable to recover it. 00:31:16.068 [2024-05-15 19:46:42.167877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.168053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.168060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.068 qpair failed and we were unable to recover it. 00:31:16.068 [2024-05-15 19:46:42.168250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.168629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.168637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.068 qpair failed and we were unable to recover it. 00:31:16.068 [2024-05-15 19:46:42.169010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.169412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.068 [2024-05-15 19:46:42.169420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.068 qpair failed and we were unable to recover it. 00:31:16.069 [2024-05-15 19:46:42.169782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.170029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.170036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.069 qpair failed and we were unable to recover it. 00:31:16.069 [2024-05-15 19:46:42.170284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.170662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.170670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.069 qpair failed and we were unable to recover it. 
00:31:16.069 [2024-05-15 19:46:42.171038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.171440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.171448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.069 qpair failed and we were unable to recover it. 00:31:16.069 [2024-05-15 19:46:42.171817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.172220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.172228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.069 qpair failed and we were unable to recover it. 00:31:16.069 [2024-05-15 19:46:42.172618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.172983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.172991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.069 qpair failed and we were unable to recover it. 00:31:16.069 [2024-05-15 19:46:42.173258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.173504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.173513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.069 qpair failed and we were unable to recover it. 00:31:16.069 [2024-05-15 19:46:42.173773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.174021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.174030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.069 qpair failed and we were unable to recover it. 00:31:16.069 [2024-05-15 19:46:42.174421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.174626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.174633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.069 qpair failed and we were unable to recover it. 00:31:16.069 [2024-05-15 19:46:42.174893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.175293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.175301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.069 qpair failed and we were unable to recover it. 
00:31:16.069 [2024-05-15 19:46:42.175503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.175815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.175823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.069 qpair failed and we were unable to recover it. 00:31:16.069 [2024-05-15 19:46:42.176198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.176577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.176586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.069 qpair failed and we were unable to recover it. 00:31:16.069 [2024-05-15 19:46:42.176964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.177023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.177030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.069 qpair failed and we were unable to recover it. 00:31:16.069 [2024-05-15 19:46:42.177373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.177741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.177749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.069 qpair failed and we were unable to recover it. 00:31:16.069 [2024-05-15 19:46:42.177949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.178308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.178320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.069 qpair failed and we were unable to recover it. 00:31:16.069 [2024-05-15 19:46:42.178681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.179080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.179087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.069 qpair failed and we were unable to recover it. 00:31:16.069 [2024-05-15 19:46:42.179461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.179710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.179717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.069 qpair failed and we were unable to recover it. 
00:31:16.069 [2024-05-15 19:46:42.180116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.180520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.180527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.069 qpair failed and we were unable to recover it. 00:31:16.069 [2024-05-15 19:46:42.180921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.180970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.180977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.069 qpair failed and we were unable to recover it. 00:31:16.069 [2024-05-15 19:46:42.181097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.181502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.181511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.069 qpair failed and we were unable to recover it. 00:31:16.069 [2024-05-15 19:46:42.181904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.182117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.182124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.069 qpair failed and we were unable to recover it. 00:31:16.069 [2024-05-15 19:46:42.182522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.182805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.182813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.069 qpair failed and we were unable to recover it. 00:31:16.069 [2024-05-15 19:46:42.183014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.183186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.183195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.069 qpair failed and we were unable to recover it. 00:31:16.069 [2024-05-15 19:46:42.183572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.183979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.183988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.069 qpair failed and we were unable to recover it. 
00:31:16.069 [2024-05-15 19:46:42.184387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.184747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.184754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.069 qpair failed and we were unable to recover it. 00:31:16.069 [2024-05-15 19:46:42.185132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.185488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.185496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.069 qpair failed and we were unable to recover it. 00:31:16.069 [2024-05-15 19:46:42.185889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.186099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.186107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.069 qpair failed and we were unable to recover it. 00:31:16.069 [2024-05-15 19:46:42.186494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.186733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.186742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.069 qpair failed and we were unable to recover it. 00:31:16.069 [2024-05-15 19:46:42.186933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.187325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.187334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.069 qpair failed and we were unable to recover it. 00:31:16.069 [2024-05-15 19:46:42.187716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.188015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.188023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.069 qpair failed and we were unable to recover it. 00:31:16.069 [2024-05-15 19:46:42.188419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.069 [2024-05-15 19:46:42.188802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.188810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.070 qpair failed and we were unable to recover it. 
00:31:16.070 [2024-05-15 19:46:42.189228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.189425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.189432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.070 qpair failed and we were unable to recover it. 00:31:16.070 [2024-05-15 19:46:42.189675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.190074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.190081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.070 qpair failed and we were unable to recover it. 00:31:16.070 [2024-05-15 19:46:42.190464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.190694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.190701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.070 qpair failed and we were unable to recover it. 00:31:16.070 [2024-05-15 19:46:42.191070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.191441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.191450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.070 qpair failed and we were unable to recover it. 00:31:16.070 [2024-05-15 19:46:42.191825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.192118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.192126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.070 qpair failed and we were unable to recover it. 00:31:16.070 [2024-05-15 19:46:42.192486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.192688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.192696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.070 qpair failed and we were unable to recover it. 00:31:16.070 [2024-05-15 19:46:42.193076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.193479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.193487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.070 qpair failed and we were unable to recover it. 
00:31:16.070 [2024-05-15 19:46:42.193858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.194275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.194282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.070 qpair failed and we were unable to recover it. 00:31:16.070 [2024-05-15 19:46:42.194701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.195103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.195110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.070 qpair failed and we were unable to recover it. 00:31:16.070 [2024-05-15 19:46:42.195478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.195678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.195686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.070 qpair failed and we were unable to recover it. 00:31:16.070 [2024-05-15 19:46:42.195924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.196324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.196332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.070 qpair failed and we were unable to recover it. 00:31:16.070 [2024-05-15 19:46:42.196575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.196645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.196653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.070 qpair failed and we were unable to recover it. 00:31:16.070 [2024-05-15 19:46:42.197030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.197387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.197396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.070 qpair failed and we were unable to recover it. 00:31:16.070 [2024-05-15 19:46:42.197761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.198043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.198052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.070 qpair failed and we were unable to recover it. 
00:31:16.070 [2024-05-15 19:46:42.198123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.198477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.198486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.070 qpair failed and we were unable to recover it. 00:31:16.070 [2024-05-15 19:46:42.198880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.199085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.199093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.070 qpair failed and we were unable to recover it. 00:31:16.070 [2024-05-15 19:46:42.199482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.199648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.199655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.070 qpair failed and we were unable to recover it. 00:31:16.070 [2024-05-15 19:46:42.199859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.200220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.200228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.070 qpair failed and we were unable to recover it. 00:31:16.070 [2024-05-15 19:46:42.200631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.201035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.201042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.070 qpair failed and we were unable to recover it. 00:31:16.070 [2024-05-15 19:46:42.201364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.201755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.201763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.070 qpair failed and we were unable to recover it. 00:31:16.070 [2024-05-15 19:46:42.202134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.202447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.202455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.070 qpair failed and we were unable to recover it. 
00:31:16.070 [2024-05-15 19:46:42.202821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.203171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.203179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.070 qpair failed and we were unable to recover it. 00:31:16.070 [2024-05-15 19:46:42.203555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.203964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.203972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.070 qpair failed and we were unable to recover it. 00:31:16.070 [2024-05-15 19:46:42.204369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.204753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.204761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.070 qpair failed and we were unable to recover it. 00:31:16.070 [2024-05-15 19:46:42.205134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.205540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.205548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.070 qpair failed and we were unable to recover it. 00:31:16.070 [2024-05-15 19:46:42.205914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.206274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.206282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.070 qpair failed and we were unable to recover it. 00:31:16.070 [2024-05-15 19:46:42.206482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.206711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.206719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.070 qpair failed and we were unable to recover it. 00:31:16.070 [2024-05-15 19:46:42.207103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.207462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.207470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.070 qpair failed and we were unable to recover it. 
00:31:16.070 [2024-05-15 19:46:42.207753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.208113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.070 [2024-05-15 19:46:42.208120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.070 qpair failed and we were unable to recover it. 00:31:16.071 [2024-05-15 19:46:42.208491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.208899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.208908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.071 qpair failed and we were unable to recover it. 00:31:16.071 [2024-05-15 19:46:42.208968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.209325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.209335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.071 qpair failed and we were unable to recover it. 00:31:16.071 [2024-05-15 19:46:42.209569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.209775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.209783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.071 qpair failed and we were unable to recover it. 00:31:16.071 [2024-05-15 19:46:42.210011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.210256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.210264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.071 qpair failed and we were unable to recover it. 00:31:16.071 [2024-05-15 19:46:42.210635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.210990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.210998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.071 qpair failed and we were unable to recover it. 00:31:16.071 [2024-05-15 19:46:42.211273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.211492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.211501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.071 qpair failed and we were unable to recover it. 
00:31:16.071 [2024-05-15 19:46:42.211846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.212244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.212252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.071 qpair failed and we were unable to recover it. 00:31:16.071 [2024-05-15 19:46:42.212486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.212879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.212887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.071 qpair failed and we were unable to recover it. 00:31:16.071 [2024-05-15 19:46:42.213177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.213246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.213255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.071 qpair failed and we were unable to recover it. 00:31:16.071 [2024-05-15 19:46:42.213604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.213964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.213972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.071 qpair failed and we were unable to recover it. 00:31:16.071 [2024-05-15 19:46:42.214309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.214522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.214531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.071 qpair failed and we were unable to recover it. 00:31:16.071 [2024-05-15 19:46:42.214725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.214984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.214991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.071 qpair failed and we were unable to recover it. 00:31:16.071 [2024-05-15 19:46:42.215367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.215712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.215719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.071 qpair failed and we were unable to recover it. 
00:31:16.071 [2024-05-15 19:46:42.216093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.216455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.216463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.071 qpair failed and we were unable to recover it. 00:31:16.071 [2024-05-15 19:46:42.216674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.217035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.217043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.071 qpair failed and we were unable to recover it. 00:31:16.071 [2024-05-15 19:46:42.217239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.217616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.217624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.071 qpair failed and we were unable to recover it. 00:31:16.071 [2024-05-15 19:46:42.218014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.218369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.218377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.071 qpair failed and we were unable to recover it. 00:31:16.071 [2024-05-15 19:46:42.218754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.219134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.219143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.071 qpair failed and we were unable to recover it. 00:31:16.071 [2024-05-15 19:46:42.219544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.219851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.219858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.071 qpair failed and we were unable to recover it. 00:31:16.071 [2024-05-15 19:46:42.220239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.220640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.220648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.071 qpair failed and we were unable to recover it. 
00:31:16.071 [2024-05-15 19:46:42.221075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.221481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.221489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.071 qpair failed and we were unable to recover it. 00:31:16.071 [2024-05-15 19:46:42.221855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.222059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.222067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.071 qpair failed and we were unable to recover it. 00:31:16.071 [2024-05-15 19:46:42.222431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.222705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.222714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.071 qpair failed and we were unable to recover it. 00:31:16.071 [2024-05-15 19:46:42.223164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.223426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.223435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.071 qpair failed and we were unable to recover it. 00:31:16.071 [2024-05-15 19:46:42.223682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.071 [2024-05-15 19:46:42.224087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.072 [2024-05-15 19:46:42.224095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.072 qpair failed and we were unable to recover it. 00:31:16.072 [2024-05-15 19:46:42.224464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.072 [2024-05-15 19:46:42.224817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.072 [2024-05-15 19:46:42.224824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.072 qpair failed and we were unable to recover it. 00:31:16.072 [2024-05-15 19:46:42.225221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.072 [2024-05-15 19:46:42.225583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.072 [2024-05-15 19:46:42.225592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.072 qpair failed and we were unable to recover it. 
00:31:16.072 [2024-05-15 19:46:42.225962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.072 [2024-05-15 19:46:42.226183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.072 [2024-05-15 19:46:42.226192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.072 qpair failed and we were unable to recover it. 00:31:16.072 [2024-05-15 19:46:42.226560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.072 [2024-05-15 19:46:42.226970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.072 [2024-05-15 19:46:42.226978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.072 qpair failed and we were unable to recover it. 00:31:16.072 [2024-05-15 19:46:42.227349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.072 [2024-05-15 19:46:42.227761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.072 [2024-05-15 19:46:42.227769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.072 qpair failed and we were unable to recover it. 00:31:16.072 [2024-05-15 19:46:42.227971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.072 [2024-05-15 19:46:42.228209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.072 [2024-05-15 19:46:42.228217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.072 qpair failed and we were unable to recover it. 00:31:16.072 [2024-05-15 19:46:42.228450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.072 [2024-05-15 19:46:42.228849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.072 [2024-05-15 19:46:42.228857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.072 qpair failed and we were unable to recover it. 00:31:16.072 [2024-05-15 19:46:42.229225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.072 [2024-05-15 19:46:42.229592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.072 [2024-05-15 19:46:42.229600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.072 qpair failed and we were unable to recover it. 00:31:16.072 [2024-05-15 19:46:42.229968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.072 [2024-05-15 19:46:42.230384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.072 [2024-05-15 19:46:42.230393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.072 qpair failed and we were unable to recover it. 
00:31:16.072 [2024-05-15 19:46:42.230597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.072 [2024-05-15 19:46:42.230842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.072 [2024-05-15 19:46:42.230851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.072 qpair failed and we were unable to recover it. 00:31:16.072 [2024-05-15 19:46:42.231263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.072 [2024-05-15 19:46:42.231685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.072 [2024-05-15 19:46:42.231693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.072 qpair failed and we were unable to recover it. 00:31:16.072 [2024-05-15 19:46:42.231889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.072 [2024-05-15 19:46:42.232216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.072 [2024-05-15 19:46:42.232224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.072 qpair failed and we were unable to recover it. 00:31:16.072 [2024-05-15 19:46:42.232361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.072 [2024-05-15 19:46:42.232684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.072 [2024-05-15 19:46:42.232692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.072 qpair failed and we were unable to recover it. 00:31:16.072 [2024-05-15 19:46:42.233090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.072 [2024-05-15 19:46:42.233504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.072 [2024-05-15 19:46:42.233512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.072 qpair failed and we were unable to recover it. 00:31:16.345 [2024-05-15 19:46:42.233888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.345 [2024-05-15 19:46:42.234240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.346 [2024-05-15 19:46:42.234248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.346 qpair failed and we were unable to recover it. 00:31:16.346 [2024-05-15 19:46:42.234621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.346 [2024-05-15 19:46:42.235025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.346 [2024-05-15 19:46:42.235036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.346 qpair failed and we were unable to recover it. 
00:31:16.346 [2024-05-15 19:46:42.235407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.346 [2024-05-15 19:46:42.235817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.346 [2024-05-15 19:46:42.235825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.346 qpair failed and we were unable to recover it. 00:31:16.346 [2024-05-15 19:46:42.236188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.346 [2024-05-15 19:46:42.236559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.346 [2024-05-15 19:46:42.236567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.346 qpair failed and we were unable to recover it. 00:31:16.346 [2024-05-15 19:46:42.236940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.346 [2024-05-15 19:46:42.237160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.346 [2024-05-15 19:46:42.237168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.346 qpair failed and we were unable to recover it. 00:31:16.346 [2024-05-15 19:46:42.237548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.346 [2024-05-15 19:46:42.237798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.346 [2024-05-15 19:46:42.237806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.346 qpair failed and we were unable to recover it. 00:31:16.346 [2024-05-15 19:46:42.238187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.346 [2024-05-15 19:46:42.238543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.346 [2024-05-15 19:46:42.238551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.346 qpair failed and we were unable to recover it. 00:31:16.346 [2024-05-15 19:46:42.238762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.346 [2024-05-15 19:46:42.238996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.346 [2024-05-15 19:46:42.239004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.346 qpair failed and we were unable to recover it. 00:31:16.346 [2024-05-15 19:46:42.239418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.346 [2024-05-15 19:46:42.239819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.346 [2024-05-15 19:46:42.239827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.346 qpair failed and we were unable to recover it. 
00:31:16.346 [2024-05-15 19:46:42.240196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.346 [2024-05-15 19:46:42.240577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.346 [2024-05-15 19:46:42.240584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.346 qpair failed and we were unable to recover it. 00:31:16.346 [2024-05-15 19:46:42.240953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.346 [2024-05-15 19:46:42.241162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.346 [2024-05-15 19:46:42.241169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.346 qpair failed and we were unable to recover it. 00:31:16.346 [2024-05-15 19:46:42.241545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.346 [2024-05-15 19:46:42.241830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.346 [2024-05-15 19:46:42.241840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.346 qpair failed and we were unable to recover it. 00:31:16.346 [2024-05-15 19:46:42.242233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.346 [2024-05-15 19:46:42.242478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.346 [2024-05-15 19:46:42.242487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.346 qpair failed and we were unable to recover it. 00:31:16.346 [2024-05-15 19:46:42.242869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.346 [2024-05-15 19:46:42.243227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.346 [2024-05-15 19:46:42.243235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.346 qpair failed and we were unable to recover it. 00:31:16.346 [2024-05-15 19:46:42.243447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.346 [2024-05-15 19:46:42.243779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.346 [2024-05-15 19:46:42.243787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.346 qpair failed and we were unable to recover it. 00:31:16.346 [2024-05-15 19:46:42.244181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.346 [2024-05-15 19:46:42.244403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.346 [2024-05-15 19:46:42.244411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.346 qpair failed and we were unable to recover it. 
00:31:16.346 [2024-05-15 19:46:42.244778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.346 [2024-05-15 19:46:42.245000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.346 [2024-05-15 19:46:42.245009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.346 qpair failed and we were unable to recover it. 00:31:16.346 [2024-05-15 19:46:42.245210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.346 [2024-05-15 19:46:42.245591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.346 [2024-05-15 19:46:42.245599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.346 qpair failed and we were unable to recover it. 00:31:16.346 [2024-05-15 19:46:42.246054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.346 [2024-05-15 19:46:42.246416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.346 [2024-05-15 19:46:42.246423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.346 qpair failed and we were unable to recover it. 00:31:16.346 [2024-05-15 19:46:42.246635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.346 [2024-05-15 19:46:42.246818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.346 [2024-05-15 19:46:42.246827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.346 qpair failed and we were unable to recover it. 00:31:16.347 [2024-05-15 19:46:42.247095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.347 [2024-05-15 19:46:42.247304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.347 [2024-05-15 19:46:42.247312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.347 qpair failed and we were unable to recover it. 00:31:16.347 [2024-05-15 19:46:42.247686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.347 [2024-05-15 19:46:42.248087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.347 [2024-05-15 19:46:42.248096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.347 qpair failed and we were unable to recover it. 00:31:16.347 [2024-05-15 19:46:42.248418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.347 [2024-05-15 19:46:42.248803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.347 [2024-05-15 19:46:42.248811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.347 qpair failed and we were unable to recover it. 
00:31:16.347 [2024-05-15 19:46:42.249204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.347 [2024-05-15 19:46:42.249582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.347 [2024-05-15 19:46:42.249590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.347 qpair failed and we were unable to recover it. 00:31:16.347 [2024-05-15 19:46:42.249871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.347 [2024-05-15 19:46:42.250281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.347 [2024-05-15 19:46:42.250289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.347 qpair failed and we were unable to recover it. 00:31:16.347 [2024-05-15 19:46:42.250490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.347 [2024-05-15 19:46:42.250741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.347 [2024-05-15 19:46:42.250749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.347 qpair failed and we were unable to recover it. 00:31:16.347 [2024-05-15 19:46:42.250983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.347 [2024-05-15 19:46:42.251390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.347 [2024-05-15 19:46:42.251398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.347 qpair failed and we were unable to recover it. 00:31:16.347 [2024-05-15 19:46:42.251739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.347 [2024-05-15 19:46:42.251966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.347 [2024-05-15 19:46:42.251974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.347 qpair failed and we were unable to recover it. 00:31:16.347 [2024-05-15 19:46:42.252027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.347 [2024-05-15 19:46:42.252380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.347 [2024-05-15 19:46:42.252388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.347 qpair failed and we were unable to recover it. 00:31:16.347 [2024-05-15 19:46:42.252657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.347 [2024-05-15 19:46:42.253057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.347 [2024-05-15 19:46:42.253064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.347 qpair failed and we were unable to recover it. 
00:31:16.347 [2024-05-15 19:46:42.253435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.347 [2024-05-15 19:46:42.253763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.347 [2024-05-15 19:46:42.253771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.347 qpair failed and we were unable to recover it. 00:31:16.347 [2024-05-15 19:46:42.254003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.347 [2024-05-15 19:46:42.254373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.347 [2024-05-15 19:46:42.254381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.347 qpair failed and we were unable to recover it. 00:31:16.347 [2024-05-15 19:46:42.254578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.347 [2024-05-15 19:46:42.254937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.347 [2024-05-15 19:46:42.254944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.347 qpair failed and we were unable to recover it. 00:31:16.347 [2024-05-15 19:46:42.255347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.347 [2024-05-15 19:46:42.255760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.347 [2024-05-15 19:46:42.255768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.347 qpair failed and we were unable to recover it. 00:31:16.347 [2024-05-15 19:46:42.256143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.347 [2024-05-15 19:46:42.256574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.347 [2024-05-15 19:46:42.256582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.347 qpair failed and we were unable to recover it. 00:31:16.347 [2024-05-15 19:46:42.256782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.347 [2024-05-15 19:46:42.256974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.347 [2024-05-15 19:46:42.256983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.347 qpair failed and we were unable to recover it. 00:31:16.347 [2024-05-15 19:46:42.257355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.347 [2024-05-15 19:46:42.257637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.347 [2024-05-15 19:46:42.257644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.347 qpair failed and we were unable to recover it. 
00:31:16.347 [2024-05-15 19:46:42.258011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.347 [2024-05-15 19:46:42.258415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.347 [2024-05-15 19:46:42.258423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.347 qpair failed and we were unable to recover it. 00:31:16.347 [2024-05-15 19:46:42.258882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.347 [2024-05-15 19:46:42.259043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.347 [2024-05-15 19:46:42.259051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.347 qpair failed and we were unable to recover it. 00:31:16.347 [2024-05-15 19:46:42.259434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.348 [2024-05-15 19:46:42.259846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.348 [2024-05-15 19:46:42.259854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.348 qpair failed and we were unable to recover it. 00:31:16.348 [2024-05-15 19:46:42.260227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.348 [2024-05-15 19:46:42.260476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.348 [2024-05-15 19:46:42.260483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.348 qpair failed and we were unable to recover it. 00:31:16.348 [2024-05-15 19:46:42.260863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.348 [2024-05-15 19:46:42.261273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.348 [2024-05-15 19:46:42.261281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.348 qpair failed and we were unable to recover it. 00:31:16.348 [2024-05-15 19:46:42.261656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.348 [2024-05-15 19:46:42.262044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.348 [2024-05-15 19:46:42.262052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.348 qpair failed and we were unable to recover it. 00:31:16.348 [2024-05-15 19:46:42.262445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.348 [2024-05-15 19:46:42.262844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.348 [2024-05-15 19:46:42.262852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.348 qpair failed and we were unable to recover it. 
00:31:16.348 [2024-05-15 19:46:42.263220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.348 [2024-05-15 19:46:42.263433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.348 [2024-05-15 19:46:42.263441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.348 qpair failed and we were unable to recover it. 00:31:16.348 [2024-05-15 19:46:42.263828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.348 [2024-05-15 19:46:42.264228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.348 [2024-05-15 19:46:42.264235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.348 qpair failed and we were unable to recover it. 00:31:16.348 [2024-05-15 19:46:42.264521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.348 [2024-05-15 19:46:42.264884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.348 [2024-05-15 19:46:42.264892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.348 qpair failed and we were unable to recover it. 00:31:16.348 [2024-05-15 19:46:42.265284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.348 [2024-05-15 19:46:42.265609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.348 [2024-05-15 19:46:42.265617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.348 qpair failed and we were unable to recover it. 00:31:16.348 [2024-05-15 19:46:42.266001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.348 [2024-05-15 19:46:42.266411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.348 [2024-05-15 19:46:42.266418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.348 qpair failed and we were unable to recover it. 00:31:16.348 [2024-05-15 19:46:42.266661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.348 [2024-05-15 19:46:42.266944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.348 [2024-05-15 19:46:42.266952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.348 qpair failed and we were unable to recover it. 00:31:16.348 [2024-05-15 19:46:42.267315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.348 [2024-05-15 19:46:42.267708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.348 [2024-05-15 19:46:42.267716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.348 qpair failed and we were unable to recover it. 
00:31:16.348 [2024-05-15 19:46:42.268096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.348 [2024-05-15 19:46:42.268501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.348 [2024-05-15 19:46:42.268509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.348 qpair failed and we were unable to recover it. 00:31:16.348 [2024-05-15 19:46:42.268878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.348 [2024-05-15 19:46:42.269129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.348 [2024-05-15 19:46:42.269138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.348 qpair failed and we were unable to recover it. 00:31:16.348 [2024-05-15 19:46:42.269504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.348 [2024-05-15 19:46:42.269710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.348 [2024-05-15 19:46:42.269719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.348 qpair failed and we were unable to recover it. 00:31:16.348 [2024-05-15 19:46:42.270065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.348 [2024-05-15 19:46:42.270283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.348 [2024-05-15 19:46:42.270291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.348 qpair failed and we were unable to recover it. 00:31:16.348 [2024-05-15 19:46:42.270572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.348 [2024-05-15 19:46:42.270928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.348 [2024-05-15 19:46:42.270935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.348 qpair failed and we were unable to recover it. 00:31:16.349 [2024-05-15 19:46:42.271206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.349 [2024-05-15 19:46:42.271549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.349 [2024-05-15 19:46:42.271557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.349 qpair failed and we were unable to recover it. 00:31:16.349 [2024-05-15 19:46:42.271925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.349 [2024-05-15 19:46:42.272145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.349 [2024-05-15 19:46:42.272153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.349 qpair failed and we were unable to recover it. 
00:31:16.349 [2024-05-15 19:46:42.272455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.349 [2024-05-15 19:46:42.272733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.349 [2024-05-15 19:46:42.272742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.349 qpair failed and we were unable to recover it. 00:31:16.349 [2024-05-15 19:46:42.273135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.349 [2024-05-15 19:46:42.273537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.349 [2024-05-15 19:46:42.273545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.349 qpair failed and we were unable to recover it. 00:31:16.349 [2024-05-15 19:46:42.273915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.349 [2024-05-15 19:46:42.274113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.349 [2024-05-15 19:46:42.274120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.349 qpair failed and we were unable to recover it. 00:31:16.349 [2024-05-15 19:46:42.274428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.349 [2024-05-15 19:46:42.274787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.349 [2024-05-15 19:46:42.274795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.349 qpair failed and we were unable to recover it. 00:31:16.349 [2024-05-15 19:46:42.275010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.349 [2024-05-15 19:46:42.275357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.349 [2024-05-15 19:46:42.275365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.349 qpair failed and we were unable to recover it. 00:31:16.349 [2024-05-15 19:46:42.275717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.349 [2024-05-15 19:46:42.275961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.349 [2024-05-15 19:46:42.275968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.349 qpair failed and we were unable to recover it. 00:31:16.349 [2024-05-15 19:46:42.276395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.349 [2024-05-15 19:46:42.276780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.349 [2024-05-15 19:46:42.276788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.349 qpair failed and we were unable to recover it. 
00:31:16.349 [2024-05-15 19:46:42.277151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.349 [2024-05-15 19:46:42.277542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.349 [2024-05-15 19:46:42.277550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.349 qpair failed and we were unable to recover it. 00:31:16.349 [2024-05-15 19:46:42.277921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.349 [2024-05-15 19:46:42.278278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.349 [2024-05-15 19:46:42.278285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.349 qpair failed and we were unable to recover it. 00:31:16.349 [2024-05-15 19:46:42.278600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.349 [2024-05-15 19:46:42.278992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.349 [2024-05-15 19:46:42.279000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.349 qpair failed and we were unable to recover it. 00:31:16.349 [2024-05-15 19:46:42.279367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.349 [2024-05-15 19:46:42.279771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.349 [2024-05-15 19:46:42.279778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.349 qpair failed and we were unable to recover it. 00:31:16.349 [2024-05-15 19:46:42.280147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.349 [2024-05-15 19:46:42.280516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.349 [2024-05-15 19:46:42.280524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.349 qpair failed and we were unable to recover it. 00:31:16.349 [2024-05-15 19:46:42.280891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.349 [2024-05-15 19:46:42.281255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.349 [2024-05-15 19:46:42.281263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.349 qpair failed and we were unable to recover it. 00:31:16.349 [2024-05-15 19:46:42.281543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.349 [2024-05-15 19:46:42.281932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.349 [2024-05-15 19:46:42.281939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.349 qpair failed and we were unable to recover it. 
00:31:16.349 [2024-05-15 19:46:42.282310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.349 [2024-05-15 19:46:42.282540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.349 [2024-05-15 19:46:42.282549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.349 qpair failed and we were unable to recover it. 00:31:16.349 [2024-05-15 19:46:42.282921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.349 [2024-05-15 19:46:42.283321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.349 [2024-05-15 19:46:42.283329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.349 qpair failed and we were unable to recover it. 00:31:16.349 [2024-05-15 19:46:42.283585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.349 [2024-05-15 19:46:42.283828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.350 [2024-05-15 19:46:42.283836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.350 qpair failed and we were unable to recover it. 00:31:16.350 [2024-05-15 19:46:42.284226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.350 [2024-05-15 19:46:42.284596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.350 [2024-05-15 19:46:42.284605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.350 qpair failed and we were unable to recover it. 00:31:16.350 [2024-05-15 19:46:42.284973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.350 [2024-05-15 19:46:42.285382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.350 [2024-05-15 19:46:42.285390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.350 qpair failed and we were unable to recover it. 00:31:16.350 [2024-05-15 19:46:42.285790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.350 [2024-05-15 19:46:42.286157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.350 [2024-05-15 19:46:42.286165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.350 qpair failed and we were unable to recover it. 00:31:16.350 [2024-05-15 19:46:42.286532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.350 [2024-05-15 19:46:42.286902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.350 [2024-05-15 19:46:42.286909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.350 qpair failed and we were unable to recover it. 
00:31:16.350 [2024-05-15 19:46:42.287300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.350 [2024-05-15 19:46:42.287369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.350 [2024-05-15 19:46:42.287376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.350 qpair failed and we were unable to recover it. 00:31:16.350 [2024-05-15 19:46:42.287678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.350 [2024-05-15 19:46:42.288028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.350 [2024-05-15 19:46:42.288036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.350 qpair failed and we were unable to recover it. 00:31:16.350 [2024-05-15 19:46:42.288244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.350 [2024-05-15 19:46:42.288604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.350 [2024-05-15 19:46:42.288612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.350 qpair failed and we were unable to recover it. 00:31:16.350 [2024-05-15 19:46:42.288822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.350 [2024-05-15 19:46:42.288878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.350 [2024-05-15 19:46:42.288886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.350 qpair failed and we were unable to recover it. 00:31:16.350 [2024-05-15 19:46:42.289266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.350 [2024-05-15 19:46:42.289440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.350 [2024-05-15 19:46:42.289448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.350 qpair failed and we were unable to recover it. 00:31:16.350 [2024-05-15 19:46:42.289839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.350 [2024-05-15 19:46:42.290244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.350 [2024-05-15 19:46:42.290251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.350 qpair failed and we were unable to recover it. 00:31:16.350 [2024-05-15 19:46:42.290652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.350 [2024-05-15 19:46:42.290988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.350 [2024-05-15 19:46:42.290995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.350 qpair failed and we were unable to recover it. 
00:31:16.350 [2024-05-15 19:46:42.291377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.350 [2024-05-15 19:46:42.291600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.350 [2024-05-15 19:46:42.291608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.350 qpair failed and we were unable to recover it. 00:31:16.350 [2024-05-15 19:46:42.292018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.350 [2024-05-15 19:46:42.292375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.350 [2024-05-15 19:46:42.292383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.350 qpair failed and we were unable to recover it. 00:31:16.350 [2024-05-15 19:46:42.292655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.350 [2024-05-15 19:46:42.292896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.350 [2024-05-15 19:46:42.292904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.350 qpair failed and we were unable to recover it. 00:31:16.350 [2024-05-15 19:46:42.293178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.350 [2024-05-15 19:46:42.293564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.350 [2024-05-15 19:46:42.293573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.350 qpair failed and we were unable to recover it. 00:31:16.350 [2024-05-15 19:46:42.293853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.350 [2024-05-15 19:46:42.294099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.350 [2024-05-15 19:46:42.294107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.350 qpair failed and we were unable to recover it. 00:31:16.350 [2024-05-15 19:46:42.294509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.350 [2024-05-15 19:46:42.294918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.350 [2024-05-15 19:46:42.294926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.350 qpair failed and we were unable to recover it. 00:31:16.350 [2024-05-15 19:46:42.295206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.350 [2024-05-15 19:46:42.295566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.350 [2024-05-15 19:46:42.295575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.350 qpair failed and we were unable to recover it. 
00:31:16.350 [2024-05-15 19:46:42.295945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.350 [2024-05-15 19:46:42.296299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.350 [2024-05-15 19:46:42.296307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.350 qpair failed and we were unable to recover it. 00:31:16.350 [2024-05-15 19:46:42.296714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.350 [2024-05-15 19:46:42.297120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.351 [2024-05-15 19:46:42.297128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.351 qpair failed and we were unable to recover it. 00:31:16.351 [2024-05-15 19:46:42.297338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.351 [2024-05-15 19:46:42.297585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.351 [2024-05-15 19:46:42.297593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.351 qpair failed and we were unable to recover it. 00:31:16.351 [2024-05-15 19:46:42.297840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.351 [2024-05-15 19:46:42.298238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.351 [2024-05-15 19:46:42.298246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.351 qpair failed and we were unable to recover it. 00:31:16.351 [2024-05-15 19:46:42.298616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.351 [2024-05-15 19:46:42.298860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.351 [2024-05-15 19:46:42.298868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.351 qpair failed and we were unable to recover it. 00:31:16.351 [2024-05-15 19:46:42.299237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.351 [2024-05-15 19:46:42.299480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.351 [2024-05-15 19:46:42.299489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.351 qpair failed and we were unable to recover it. 00:31:16.351 [2024-05-15 19:46:42.299841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.351 [2024-05-15 19:46:42.300202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.351 [2024-05-15 19:46:42.300211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.351 qpair failed and we were unable to recover it. 
00:31:16.351 [2024-05-15 19:46:42.300600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.351 [2024-05-15 19:46:42.300869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.351 [2024-05-15 19:46:42.300877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.351 qpair failed and we were unable to recover it. 00:31:16.351 [2024-05-15 19:46:42.301108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.351 [2024-05-15 19:46:42.301320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.351 [2024-05-15 19:46:42.301329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.351 qpair failed and we were unable to recover it. 00:31:16.351 [2024-05-15 19:46:42.301709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.351 [2024-05-15 19:46:42.301913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.351 [2024-05-15 19:46:42.301920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.351 qpair failed and we were unable to recover it. 00:31:16.351 [2024-05-15 19:46:42.302304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.351 [2024-05-15 19:46:42.302706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.351 [2024-05-15 19:46:42.302714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.351 qpair failed and we were unable to recover it. 00:31:16.351 [2024-05-15 19:46:42.303084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.351 [2024-05-15 19:46:42.303485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.351 [2024-05-15 19:46:42.303493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.351 qpair failed and we were unable to recover it. 00:31:16.351 [2024-05-15 19:46:42.303863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.351 [2024-05-15 19:46:42.304264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.351 [2024-05-15 19:46:42.304272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.351 qpair failed and we were unable to recover it. 00:31:16.351 [2024-05-15 19:46:42.304469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.351 [2024-05-15 19:46:42.304653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.351 [2024-05-15 19:46:42.304661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.351 qpair failed and we were unable to recover it. 
00:31:16.351 [2024-05-15 19:46:42.305057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.351 [2024-05-15 19:46:42.305265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.351 [2024-05-15 19:46:42.305273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.351 qpair failed and we were unable to recover it. 00:31:16.351 [2024-05-15 19:46:42.305677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.351 [2024-05-15 19:46:42.306084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.351 [2024-05-15 19:46:42.306093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.351 qpair failed and we were unable to recover it. 00:31:16.351 [2024-05-15 19:46:42.306471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.351 [2024-05-15 19:46:42.306860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.351 [2024-05-15 19:46:42.306868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.351 qpair failed and we were unable to recover it. 00:31:16.351 [2024-05-15 19:46:42.307232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.351 [2024-05-15 19:46:42.307592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.352 [2024-05-15 19:46:42.307600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.352 qpair failed and we were unable to recover it. 00:31:16.352 [2024-05-15 19:46:42.307993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.352 [2024-05-15 19:46:42.308237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.352 [2024-05-15 19:46:42.308245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.352 qpair failed and we were unable to recover it. 00:31:16.352 [2024-05-15 19:46:42.308445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.352 [2024-05-15 19:46:42.308639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.352 [2024-05-15 19:46:42.308647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.352 qpair failed and we were unable to recover it. 00:31:16.352 [2024-05-15 19:46:42.308999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.352 [2024-05-15 19:46:42.309403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.352 [2024-05-15 19:46:42.309411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.352 qpair failed and we were unable to recover it. 
00:31:16.352 [2024-05-15 19:46:42.309783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.352 [2024-05-15 19:46:42.309944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.352 [2024-05-15 19:46:42.309952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.352 qpair failed and we were unable to recover it. 00:31:16.352 [2024-05-15 19:46:42.310341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.352 [2024-05-15 19:46:42.310717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.352 [2024-05-15 19:46:42.310726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.352 qpair failed and we were unable to recover it. 00:31:16.352 [2024-05-15 19:46:42.311089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.352 [2024-05-15 19:46:42.311492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.352 [2024-05-15 19:46:42.311501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.352 qpair failed and we were unable to recover it. 00:31:16.352 [2024-05-15 19:46:42.311822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.352 [2024-05-15 19:46:42.312181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.352 [2024-05-15 19:46:42.312189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.352 qpair failed and we were unable to recover it. 00:31:16.352 [2024-05-15 19:46:42.312421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.352 [2024-05-15 19:46:42.312653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.352 [2024-05-15 19:46:42.312661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.352 qpair failed and we were unable to recover it. 00:31:16.352 [2024-05-15 19:46:42.313056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.352 [2024-05-15 19:46:42.313299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.352 [2024-05-15 19:46:42.313308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.352 qpair failed and we were unable to recover it. 00:31:16.352 [2024-05-15 19:46:42.313698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.352 [2024-05-15 19:46:42.314107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.352 [2024-05-15 19:46:42.314115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.352 qpair failed and we were unable to recover it. 
00:31:16.352 [2024-05-15 19:46:42.314348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.352 [2024-05-15 19:46:42.314732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.352 [2024-05-15 19:46:42.314740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.352 qpair failed and we were unable to recover it. 00:31:16.352 [2024-05-15 19:46:42.315108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.352 [2024-05-15 19:46:42.315516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.352 [2024-05-15 19:46:42.315525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.352 qpair failed and we were unable to recover it. 00:31:16.352 [2024-05-15 19:46:42.315735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.352 [2024-05-15 19:46:42.316073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.352 [2024-05-15 19:46:42.316082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.352 qpair failed and we were unable to recover it. 00:31:16.352 [2024-05-15 19:46:42.316295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.352 [2024-05-15 19:46:42.316461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.352 [2024-05-15 19:46:42.316470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.352 qpair failed and we were unable to recover it. 00:31:16.352 [2024-05-15 19:46:42.316801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.352 [2024-05-15 19:46:42.317154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.352 [2024-05-15 19:46:42.317162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.352 qpair failed and we were unable to recover it. 00:31:16.352 [2024-05-15 19:46:42.317447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.352 [2024-05-15 19:46:42.317849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.352 [2024-05-15 19:46:42.317856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.352 qpair failed and we were unable to recover it. 00:31:16.352 [2024-05-15 19:46:42.318207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.352 [2024-05-15 19:46:42.318492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.353 [2024-05-15 19:46:42.318500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.353 qpair failed and we were unable to recover it. 
00:31:16.353 [2024-05-15 19:46:42.318873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.353 [2024-05-15 19:46:42.319279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.353 [2024-05-15 19:46:42.319287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.353 qpair failed and we were unable to recover it. 00:31:16.353 [2024-05-15 19:46:42.319652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.353 [2024-05-15 19:46:42.319860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.353 [2024-05-15 19:46:42.319868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.353 qpair failed and we were unable to recover it. 00:31:16.353 [2024-05-15 19:46:42.319915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.353 [2024-05-15 19:46:42.320293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.353 [2024-05-15 19:46:42.320301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.353 qpair failed and we were unable to recover it. 00:31:16.353 [2024-05-15 19:46:42.320543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.353 [2024-05-15 19:46:42.320785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.353 [2024-05-15 19:46:42.320792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.353 qpair failed and we were unable to recover it. 00:31:16.353 [2024-05-15 19:46:42.320955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.353 [2024-05-15 19:46:42.321347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.353 [2024-05-15 19:46:42.321355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.353 qpair failed and we were unable to recover it. 00:31:16.353 [2024-05-15 19:46:42.321690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.353 [2024-05-15 19:46:42.322090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.353 [2024-05-15 19:46:42.322098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.353 qpair failed and we were unable to recover it. 00:31:16.353 [2024-05-15 19:46:42.322469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.353 [2024-05-15 19:46:42.322779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.353 [2024-05-15 19:46:42.322787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.353 qpair failed and we were unable to recover it. 
00:31:16.353 [2024-05-15 19:46:42.323144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.353 [2024-05-15 19:46:42.323391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.353 [2024-05-15 19:46:42.323399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.353 qpair failed and we were unable to recover it. 00:31:16.353 [2024-05-15 19:46:42.323768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.353 [2024-05-15 19:46:42.323951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.353 [2024-05-15 19:46:42.323959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.354 qpair failed and we were unable to recover it. 00:31:16.354 [2024-05-15 19:46:42.324390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.354 [2024-05-15 19:46:42.324747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.354 [2024-05-15 19:46:42.324755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.354 qpair failed and we were unable to recover it. 00:31:16.354 [2024-05-15 19:46:42.325124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.354 [2024-05-15 19:46:42.325407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.354 [2024-05-15 19:46:42.325416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.354 qpair failed and we were unable to recover it. 00:31:16.354 [2024-05-15 19:46:42.325582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.354 [2024-05-15 19:46:42.325977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.354 [2024-05-15 19:46:42.325985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.354 qpair failed and we were unable to recover it. 00:31:16.354 [2024-05-15 19:46:42.326217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.354 [2024-05-15 19:46:42.326588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.354 [2024-05-15 19:46:42.326596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.354 qpair failed and we were unable to recover it. 00:31:16.354 [2024-05-15 19:46:42.326963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.354 [2024-05-15 19:46:42.327366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.354 [2024-05-15 19:46:42.327374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.354 qpair failed and we were unable to recover it. 
00:31:16.354 [2024-05-15 19:46:42.327571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.354 [2024-05-15 19:46:42.327910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.354 [2024-05-15 19:46:42.327918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.354 qpair failed and we were unable to recover it. 00:31:16.354 [2024-05-15 19:46:42.328116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.354 [2024-05-15 19:46:42.328478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.354 [2024-05-15 19:46:42.328487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.354 qpair failed and we were unable to recover it. 00:31:16.354 [2024-05-15 19:46:42.328937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.354 [2024-05-15 19:46:42.329192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.354 [2024-05-15 19:46:42.329200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.354 qpair failed and we were unable to recover it. 00:31:16.354 [2024-05-15 19:46:42.329514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.354 [2024-05-15 19:46:42.329782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.354 [2024-05-15 19:46:42.329790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.354 qpair failed and we were unable to recover it. 00:31:16.354 [2024-05-15 19:46:42.330158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.354 [2024-05-15 19:46:42.330550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.354 [2024-05-15 19:46:42.330558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.354 qpair failed and we were unable to recover it. 00:31:16.354 [2024-05-15 19:46:42.330951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.354 [2024-05-15 19:46:42.331238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.354 [2024-05-15 19:46:42.331246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.354 qpair failed and we were unable to recover it. 00:31:16.354 [2024-05-15 19:46:42.331673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.354 [2024-05-15 19:46:42.332029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.354 [2024-05-15 19:46:42.332036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.354 qpair failed and we were unable to recover it. 
00:31:16.354 [2024-05-15 19:46:42.332408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.354 [2024-05-15 19:46:42.332618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.354 [2024-05-15 19:46:42.332625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.354 qpair failed and we were unable to recover it. 00:31:16.354 [2024-05-15 19:46:42.333009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.354 [2024-05-15 19:46:42.333248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.354 [2024-05-15 19:46:42.333256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.354 qpair failed and we were unable to recover it. 00:31:16.354 [2024-05-15 19:46:42.333621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.354 [2024-05-15 19:46:42.334025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.354 [2024-05-15 19:46:42.334033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.354 qpair failed and we were unable to recover it. 00:31:16.354 [2024-05-15 19:46:42.334402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.354 [2024-05-15 19:46:42.334804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.354 [2024-05-15 19:46:42.334813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.354 qpair failed and we were unable to recover it. 00:31:16.354 [2024-05-15 19:46:42.335183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.354 [2024-05-15 19:46:42.335391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.354 [2024-05-15 19:46:42.335398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.354 qpair failed and we were unable to recover it. 00:31:16.354 [2024-05-15 19:46:42.335770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.354 [2024-05-15 19:46:42.336174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.354 [2024-05-15 19:46:42.336181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.354 qpair failed and we were unable to recover it. 00:31:16.354 [2024-05-15 19:46:42.336580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.354 [2024-05-15 19:46:42.336936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.354 [2024-05-15 19:46:42.336944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.354 qpair failed and we were unable to recover it. 
00:31:16.354 [2024-05-15 19:46:42.337308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.354 [2024-05-15 19:46:42.337544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.354 [2024-05-15 19:46:42.337553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.354 qpair failed and we were unable to recover it. 00:31:16.354 [2024-05-15 19:46:42.337907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.355 [2024-05-15 19:46:42.338316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.355 [2024-05-15 19:46:42.338325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.355 qpair failed and we were unable to recover it. 00:31:16.355 [2024-05-15 19:46:42.338544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.355 [2024-05-15 19:46:42.338902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.355 [2024-05-15 19:46:42.338910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.355 qpair failed and we were unable to recover it. 00:31:16.355 [2024-05-15 19:46:42.339317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.355 [2024-05-15 19:46:42.339698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.355 [2024-05-15 19:46:42.339706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.355 qpair failed and we were unable to recover it. 00:31:16.355 [2024-05-15 19:46:42.340064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.355 [2024-05-15 19:46:42.340574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.355 [2024-05-15 19:46:42.340605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.355 qpair failed and we were unable to recover it. 00:31:16.355 [2024-05-15 19:46:42.340984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.355 [2024-05-15 19:46:42.341346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.355 [2024-05-15 19:46:42.341355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.355 qpair failed and we were unable to recover it. 00:31:16.355 [2024-05-15 19:46:42.341730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.355 [2024-05-15 19:46:42.342094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.355 [2024-05-15 19:46:42.342106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.355 qpair failed and we were unable to recover it. 
00:31:16.355 [2024-05-15 19:46:42.342392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.355 [2024-05-15 19:46:42.342704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.355 [2024-05-15 19:46:42.342713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.355 qpair failed and we were unable to recover it. 00:31:16.355 [2024-05-15 19:46:42.342901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.355 [2024-05-15 19:46:42.343123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.355 [2024-05-15 19:46:42.343131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.355 qpair failed and we were unable to recover it. 00:31:16.355 [2024-05-15 19:46:42.343198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.355 [2024-05-15 19:46:42.343391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.355 [2024-05-15 19:46:42.343399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.355 qpair failed and we were unable to recover it. 00:31:16.355 [2024-05-15 19:46:42.343782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.355 [2024-05-15 19:46:42.344137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.355 [2024-05-15 19:46:42.344145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.355 qpair failed and we were unable to recover it. 00:31:16.355 [2024-05-15 19:46:42.344538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.355 [2024-05-15 19:46:42.344943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.355 [2024-05-15 19:46:42.344952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.355 qpair failed and we were unable to recover it. 00:31:16.355 [2024-05-15 19:46:42.345166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.355 [2024-05-15 19:46:42.345538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.355 [2024-05-15 19:46:42.345546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.355 qpair failed and we were unable to recover it. 00:31:16.355 [2024-05-15 19:46:42.345756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.355 [2024-05-15 19:46:42.346049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.355 [2024-05-15 19:46:42.346057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.355 qpair failed and we were unable to recover it. 
00:31:16.355 [2024-05-15 19:46:42.346426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.355 [2024-05-15 19:46:42.346616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.355 [2024-05-15 19:46:42.346625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.355 qpair failed and we were unable to recover it. 00:31:16.355 [2024-05-15 19:46:42.346983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.355 [2024-05-15 19:46:42.347389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.355 [2024-05-15 19:46:42.347396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.355 qpair failed and we were unable to recover it. 00:31:16.355 [2024-05-15 19:46:42.347765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.355 [2024-05-15 19:46:42.348034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.355 [2024-05-15 19:46:42.348044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.355 qpair failed and we were unable to recover it. 00:31:16.355 [2024-05-15 19:46:42.348256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.355 [2024-05-15 19:46:42.348585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.355 [2024-05-15 19:46:42.348593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.355 qpair failed and we were unable to recover it. 00:31:16.355 [2024-05-15 19:46:42.348809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.355 [2024-05-15 19:46:42.348865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.355 [2024-05-15 19:46:42.348872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.355 qpair failed and we were unable to recover it. 00:31:16.355 [2024-05-15 19:46:42.349250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.355 [2024-05-15 19:46:42.349613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.355 [2024-05-15 19:46:42.349621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.355 qpair failed and we were unable to recover it. 00:31:16.355 [2024-05-15 19:46:42.349793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.355 [2024-05-15 19:46:42.350190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.355 [2024-05-15 19:46:42.350198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.355 qpair failed and we were unable to recover it. 
00:31:16.355 [2024-05-15 19:46:42.350435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.350719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.350726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.356 qpair failed and we were unable to recover it. 00:31:16.356 [2024-05-15 19:46:42.350970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.351127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.351135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.356 qpair failed and we were unable to recover it. 00:31:16.356 [2024-05-15 19:46:42.351484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.351883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.351891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.356 qpair failed and we were unable to recover it. 00:31:16.356 [2024-05-15 19:46:42.352089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.352424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.352432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.356 qpair failed and we were unable to recover it. 00:31:16.356 [2024-05-15 19:46:42.352792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.353189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.353197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.356 qpair failed and we were unable to recover it. 00:31:16.356 [2024-05-15 19:46:42.353586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.353947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.353957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.356 qpair failed and we were unable to recover it. 00:31:16.356 [2024-05-15 19:46:42.354356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.354564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.354571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.356 qpair failed and we were unable to recover it. 
00:31:16.356 [2024-05-15 19:46:42.355000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.355300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.355308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.356 qpair failed and we were unable to recover it. 00:31:16.356 [2024-05-15 19:46:42.355507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.355852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.355859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.356 qpair failed and we were unable to recover it. 00:31:16.356 [2024-05-15 19:46:42.356228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.356590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.356598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.356 qpair failed and we were unable to recover it. 00:31:16.356 [2024-05-15 19:46:42.356988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.357403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.357411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.356 qpair failed and we were unable to recover it. 00:31:16.356 [2024-05-15 19:46:42.357791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.358196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.358204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.356 qpair failed and we were unable to recover it. 00:31:16.356 [2024-05-15 19:46:42.358500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.358879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.358887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.356 qpair failed and we were unable to recover it. 00:31:16.356 [2024-05-15 19:46:42.359261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.359616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.359624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.356 qpair failed and we were unable to recover it. 
00:31:16.356 [2024-05-15 19:46:42.360051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.360340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.360351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.356 qpair failed and we were unable to recover it. 00:31:16.356 [2024-05-15 19:46:42.360737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.361091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.361099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.356 qpair failed and we were unable to recover it. 00:31:16.356 [2024-05-15 19:46:42.361493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.361822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.361829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.356 qpair failed and we were unable to recover it. 00:31:16.356 [2024-05-15 19:46:42.362216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.362590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.362597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.356 qpair failed and we were unable to recover it. 00:31:16.356 [2024-05-15 19:46:42.362985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.363385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.363393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.356 qpair failed and we were unable to recover it. 00:31:16.356 [2024-05-15 19:46:42.363616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.363774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.363783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.356 qpair failed and we were unable to recover it. 00:31:16.356 [2024-05-15 19:46:42.364030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.364401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.364409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.356 qpair failed and we were unable to recover it. 
00:31:16.356 [2024-05-15 19:46:42.364641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.364816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.364824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.356 qpair failed and we were unable to recover it. 00:31:16.356 [2024-05-15 19:46:42.365199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.365418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.365425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.356 qpair failed and we were unable to recover it. 00:31:16.356 [2024-05-15 19:46:42.365791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.366147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.366155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.356 qpair failed and we were unable to recover it. 00:31:16.356 [2024-05-15 19:46:42.366529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.366900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.356 [2024-05-15 19:46:42.366907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.356 qpair failed and we were unable to recover it. 00:31:16.357 [2024-05-15 19:46:42.367118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.357 [2024-05-15 19:46:42.367417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.357 [2024-05-15 19:46:42.367425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.357 qpair failed and we were unable to recover it. 00:31:16.357 [2024-05-15 19:46:42.367647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.357 [2024-05-15 19:46:42.368030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.357 [2024-05-15 19:46:42.368038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.357 qpair failed and we were unable to recover it. 00:31:16.357 [2024-05-15 19:46:42.368252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.357 [2024-05-15 19:46:42.368586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.357 [2024-05-15 19:46:42.368594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.357 qpair failed and we were unable to recover it. 
00:31:16.357 [2024-05-15 19:46:42.368794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.357 [2024-05-15 19:46:42.369135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.357 [2024-05-15 19:46:42.369143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.357 qpair failed and we were unable to recover it. 00:31:16.357 [2024-05-15 19:46:42.369514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.357 [2024-05-15 19:46:42.369870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.357 [2024-05-15 19:46:42.369878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.357 qpair failed and we were unable to recover it. 00:31:16.357 [2024-05-15 19:46:42.370270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.357 [2024-05-15 19:46:42.370666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.357 [2024-05-15 19:46:42.370673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.357 qpair failed and we were unable to recover it. 00:31:16.357 [2024-05-15 19:46:42.371047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.357 [2024-05-15 19:46:42.371403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.357 [2024-05-15 19:46:42.371411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.357 qpair failed and we were unable to recover it. 00:31:16.357 [2024-05-15 19:46:42.371630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.357 [2024-05-15 19:46:42.372018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.357 [2024-05-15 19:46:42.372025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.357 qpair failed and we were unable to recover it. 00:31:16.357 [2024-05-15 19:46:42.372371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.357 [2024-05-15 19:46:42.372658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.357 [2024-05-15 19:46:42.372665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.357 qpair failed and we were unable to recover it. 00:31:16.357 [2024-05-15 19:46:42.373017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.357 [2024-05-15 19:46:42.373417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.357 [2024-05-15 19:46:42.373426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.357 qpair failed and we were unable to recover it. 
00:31:16.357 [2024-05-15 19:46:42.373633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.357 [2024-05-15 19:46:42.373873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.357 [2024-05-15 19:46:42.373881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.357 qpair failed and we were unable to recover it. 00:31:16.357 [2024-05-15 19:46:42.374092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.357 [2024-05-15 19:46:42.374318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.357 [2024-05-15 19:46:42.374327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.357 qpair failed and we were unable to recover it. 00:31:16.357 [2024-05-15 19:46:42.374587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.357 [2024-05-15 19:46:42.374901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.357 [2024-05-15 19:46:42.374908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.357 qpair failed and we were unable to recover it. 00:31:16.357 [2024-05-15 19:46:42.375306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.357 [2024-05-15 19:46:42.375663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.357 [2024-05-15 19:46:42.375670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.357 qpair failed and we were unable to recover it. 00:31:16.357 [2024-05-15 19:46:42.376040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.357 [2024-05-15 19:46:42.376392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.357 [2024-05-15 19:46:42.376401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.357 qpair failed and we were unable to recover it. 00:31:16.357 [2024-05-15 19:46:42.376725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.357 [2024-05-15 19:46:42.376899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.357 [2024-05-15 19:46:42.376908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.357 qpair failed and we were unable to recover it. 00:31:16.358 [2024-05-15 19:46:42.377282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.358 [2024-05-15 19:46:42.377496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.358 [2024-05-15 19:46:42.377504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.358 qpair failed and we were unable to recover it. 
00:31:16.358 [2024-05-15 19:46:42.377908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.358 [2024-05-15 19:46:42.378150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.358 [2024-05-15 19:46:42.378157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.358 qpair failed and we were unable to recover it. 00:31:16.358 [2024-05-15 19:46:42.378521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.358 [2024-05-15 19:46:42.378922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.358 [2024-05-15 19:46:42.378930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.358 qpair failed and we were unable to recover it. 00:31:16.358 [2024-05-15 19:46:42.378978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.358 [2024-05-15 19:46:42.379339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.358 [2024-05-15 19:46:42.379347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.358 qpair failed and we were unable to recover it. 00:31:16.358 [2024-05-15 19:46:42.379723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.358 [2024-05-15 19:46:42.380125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.358 [2024-05-15 19:46:42.380132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.358 qpair failed and we were unable to recover it. 00:31:16.358 [2024-05-15 19:46:42.380491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.358 [2024-05-15 19:46:42.380740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.358 [2024-05-15 19:46:42.380747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.358 qpair failed and we were unable to recover it. 00:31:16.358 [2024-05-15 19:46:42.381116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.358 [2024-05-15 19:46:42.381333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.358 [2024-05-15 19:46:42.381340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.358 qpair failed and we were unable to recover it. 00:31:16.358 [2024-05-15 19:46:42.381722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.358 [2024-05-15 19:46:42.381931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.358 [2024-05-15 19:46:42.381938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.358 qpair failed and we were unable to recover it. 
00:31:16.358 [2024-05-15 19:46:42.382318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.358 [2024-05-15 19:46:42.382703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.358 [2024-05-15 19:46:42.382710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.358 qpair failed and we were unable to recover it. 00:31:16.358 [2024-05-15 19:46:42.383104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.358 [2024-05-15 19:46:42.383349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.358 [2024-05-15 19:46:42.383357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.358 qpair failed and we were unable to recover it. 00:31:16.358 [2024-05-15 19:46:42.383735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.358 [2024-05-15 19:46:42.384141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.358 [2024-05-15 19:46:42.384149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.358 qpair failed and we were unable to recover it. 00:31:16.358 [2024-05-15 19:46:42.384403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.358 [2024-05-15 19:46:42.384598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.358 [2024-05-15 19:46:42.384606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.358 qpair failed and we were unable to recover it. 00:31:16.358 [2024-05-15 19:46:42.384970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.358 [2024-05-15 19:46:42.385373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.358 [2024-05-15 19:46:42.385381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.358 qpair failed and we were unable to recover it. 00:31:16.358 [2024-05-15 19:46:42.385750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.358 [2024-05-15 19:46:42.386152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.358 [2024-05-15 19:46:42.386160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.358 qpair failed and we were unable to recover it. 00:31:16.358 [2024-05-15 19:46:42.386448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.358 [2024-05-15 19:46:42.386810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.358 [2024-05-15 19:46:42.386818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.358 qpair failed and we were unable to recover it. 
00:31:16.358 [2024-05-15 19:46:42.387189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.358 [2024-05-15 19:46:42.387538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.358 [2024-05-15 19:46:42.387545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.358 qpair failed and we were unable to recover it. 00:31:16.358 [2024-05-15 19:46:42.387777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.358 [2024-05-15 19:46:42.388020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.358 [2024-05-15 19:46:42.388027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.358 qpair failed and we were unable to recover it. 00:31:16.358 [2024-05-15 19:46:42.388258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.358 [2024-05-15 19:46:42.388660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.358 [2024-05-15 19:46:42.388668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.358 qpair failed and we were unable to recover it. 00:31:16.358 [2024-05-15 19:46:42.388879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.358 [2024-05-15 19:46:42.389246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.358 [2024-05-15 19:46:42.389254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.358 qpair failed and we were unable to recover it. 00:31:16.358 [2024-05-15 19:46:42.389470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.358 [2024-05-15 19:46:42.389721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.358 [2024-05-15 19:46:42.389729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.358 qpair failed and we were unable to recover it. 00:31:16.358 [2024-05-15 19:46:42.390093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.358 [2024-05-15 19:46:42.390508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.358 [2024-05-15 19:46:42.390516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.358 qpair failed and we were unable to recover it. 00:31:16.358 [2024-05-15 19:46:42.390922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.359 [2024-05-15 19:46:42.391166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.359 [2024-05-15 19:46:42.391174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.359 qpair failed and we were unable to recover it. 
00:31:16.359 [2024-05-15 19:46:42.391544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.359 [2024-05-15 19:46:42.391958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.359 [2024-05-15 19:46:42.391966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.359 qpair failed and we were unable to recover it. 00:31:16.359 [2024-05-15 19:46:42.392421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.359 [2024-05-15 19:46:42.392838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.359 [2024-05-15 19:46:42.392845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.359 qpair failed and we were unable to recover it. 00:31:16.359 [2024-05-15 19:46:42.393216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.359 [2024-05-15 19:46:42.393577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.359 [2024-05-15 19:46:42.393585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.359 qpair failed and we were unable to recover it. 00:31:16.359 [2024-05-15 19:46:42.394025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.359 [2024-05-15 19:46:42.394382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.359 [2024-05-15 19:46:42.394390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.359 qpair failed and we were unable to recover it. 00:31:16.359 [2024-05-15 19:46:42.394759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.359 [2024-05-15 19:46:42.394967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.359 [2024-05-15 19:46:42.394975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.359 qpair failed and we were unable to recover it. 00:31:16.359 [2024-05-15 19:46:42.395356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.359 [2024-05-15 19:46:42.395732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.359 [2024-05-15 19:46:42.395739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.359 qpair failed and we were unable to recover it. 00:31:16.359 [2024-05-15 19:46:42.395797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.359 [2024-05-15 19:46:42.396146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.359 [2024-05-15 19:46:42.396154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.359 qpair failed and we were unable to recover it. 
00:31:16.359 [2024-05-15 19:46:42.396544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.359 [2024-05-15 19:46:42.396946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.359 [2024-05-15 19:46:42.396954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.359 qpair failed and we were unable to recover it. 00:31:16.359 [2024-05-15 19:46:42.397325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.359 [2024-05-15 19:46:42.397507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.359 [2024-05-15 19:46:42.397514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.359 qpair failed and we were unable to recover it. 00:31:16.359 [2024-05-15 19:46:42.397778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.359 [2024-05-15 19:46:42.398170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.359 [2024-05-15 19:46:42.398178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.359 qpair failed and we were unable to recover it. 00:31:16.359 [2024-05-15 19:46:42.398548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.359 [2024-05-15 19:46:42.398755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.359 [2024-05-15 19:46:42.398763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.359 qpair failed and we were unable to recover it. 00:31:16.359 [2024-05-15 19:46:42.399130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.359 [2024-05-15 19:46:42.399486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.359 [2024-05-15 19:46:42.399493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.359 qpair failed and we were unable to recover it. 00:31:16.359 [2024-05-15 19:46:42.399701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.359 [2024-05-15 19:46:42.400096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.359 [2024-05-15 19:46:42.400103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.359 qpair failed and we were unable to recover it. 00:31:16.359 [2024-05-15 19:46:42.400492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.359 [2024-05-15 19:46:42.400589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.359 [2024-05-15 19:46:42.400597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.359 qpair failed and we were unable to recover it. 
00:31:16.359 [2024-05-15 19:46:42.400854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.359 [2024-05-15 19:46:42.401017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.359 [2024-05-15 19:46:42.401025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.360 qpair failed and we were unable to recover it. 00:31:16.360 [2024-05-15 19:46:42.401415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.360 [2024-05-15 19:46:42.401814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.360 [2024-05-15 19:46:42.401822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.360 qpair failed and we were unable to recover it. 00:31:16.360 [2024-05-15 19:46:42.402182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.360 [2024-05-15 19:46:42.402390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.360 [2024-05-15 19:46:42.402397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.360 qpair failed and we were unable to recover it. 00:31:16.360 [2024-05-15 19:46:42.402579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.360 [2024-05-15 19:46:42.402942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.360 [2024-05-15 19:46:42.402950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.360 qpair failed and we were unable to recover it. 00:31:16.360 [2024-05-15 19:46:42.403149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.360 [2024-05-15 19:46:42.403490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.360 [2024-05-15 19:46:42.403498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.360 qpair failed and we were unable to recover it. 00:31:16.360 [2024-05-15 19:46:42.403844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.360 [2024-05-15 19:46:42.404248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.360 [2024-05-15 19:46:42.404256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.360 qpair failed and we were unable to recover it. 00:31:16.360 [2024-05-15 19:46:42.404625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.360 [2024-05-15 19:46:42.405033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.360 [2024-05-15 19:46:42.405041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.360 qpair failed and we were unable to recover it. 
00:31:16.360 [2024-05-15 19:46:42.405366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.360 [2024-05-15 19:46:42.405758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.360 [2024-05-15 19:46:42.405766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.360 qpair failed and we were unable to recover it. 00:31:16.360 [2024-05-15 19:46:42.406054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.360 [2024-05-15 19:46:42.406459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.360 [2024-05-15 19:46:42.406467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.360 qpair failed and we were unable to recover it. 00:31:16.360 [2024-05-15 19:46:42.406894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.360 [2024-05-15 19:46:42.407255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.360 [2024-05-15 19:46:42.407263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.360 qpair failed and we were unable to recover it. 00:31:16.360 [2024-05-15 19:46:42.407634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.360 [2024-05-15 19:46:42.408041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.360 [2024-05-15 19:46:42.408049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.360 qpair failed and we were unable to recover it. 00:31:16.360 [2024-05-15 19:46:42.408420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.360 [2024-05-15 19:46:42.408781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.360 [2024-05-15 19:46:42.408789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.360 qpair failed and we were unable to recover it. 00:31:16.360 [2024-05-15 19:46:42.409179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.360 [2024-05-15 19:46:42.409573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.360 [2024-05-15 19:46:42.409581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.360 qpair failed and we were unable to recover it. 00:31:16.360 [2024-05-15 19:46:42.409972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.360 [2024-05-15 19:46:42.410262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.360 [2024-05-15 19:46:42.410269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.360 qpair failed and we were unable to recover it. 
00:31:16.360 [2024-05-15 19:46:42.410641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.360 [2024-05-15 19:46:42.411000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.360 [2024-05-15 19:46:42.411007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.360 qpair failed and we were unable to recover it. 00:31:16.360 [2024-05-15 19:46:42.411377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.360 [2024-05-15 19:46:42.411774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.360 [2024-05-15 19:46:42.411781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.360 qpair failed and we were unable to recover it. 00:31:16.360 [2024-05-15 19:46:42.412155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.360 [2024-05-15 19:46:42.412518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.360 [2024-05-15 19:46:42.412526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.360 qpair failed and we were unable to recover it. 00:31:16.360 [2024-05-15 19:46:42.412917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.360 [2024-05-15 19:46:42.413158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.360 [2024-05-15 19:46:42.413165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.360 qpair failed and we were unable to recover it. 00:31:16.361 [2024-05-15 19:46:42.413580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.413919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.413926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.361 qpair failed and we were unable to recover it. 00:31:16.361 [2024-05-15 19:46:42.414173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.414339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.414348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.361 qpair failed and we were unable to recover it. 00:31:16.361 [2024-05-15 19:46:42.414684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.415083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.415091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.361 qpair failed and we were unable to recover it. 
00:31:16.361 [2024-05-15 19:46:42.415486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.415727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.415735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.361 qpair failed and we were unable to recover it. 00:31:16.361 [2024-05-15 19:46:42.416103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.416451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.416459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.361 qpair failed and we were unable to recover it. 00:31:16.361 [2024-05-15 19:46:42.416854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.417137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.417146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.361 qpair failed and we were unable to recover it. 00:31:16.361 [2024-05-15 19:46:42.417543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.417718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.417725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.361 qpair failed and we were unable to recover it. 00:31:16.361 [2024-05-15 19:46:42.418058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.418460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.418469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.361 qpair failed and we were unable to recover it. 00:31:16.361 [2024-05-15 19:46:42.418837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.419240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.419247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.361 qpair failed and we were unable to recover it. 00:31:16.361 [2024-05-15 19:46:42.419319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.419664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.419672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.361 qpair failed and we were unable to recover it. 
00:31:16.361 [2024-05-15 19:46:42.419904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.420262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.420271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.361 qpair failed and we were unable to recover it. 00:31:16.361 [2024-05-15 19:46:42.420627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.420964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.420972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.361 qpair failed and we were unable to recover it. 00:31:16.361 [2024-05-15 19:46:42.421182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.421555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.421564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.361 qpair failed and we were unable to recover it. 00:31:16.361 [2024-05-15 19:46:42.421932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.422285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.422292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.361 qpair failed and we were unable to recover it. 00:31:16.361 [2024-05-15 19:46:42.422661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.423060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.423068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.361 qpair failed and we were unable to recover it. 00:31:16.361 [2024-05-15 19:46:42.423498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.423669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.423676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.361 qpair failed and we were unable to recover it. 00:31:16.361 [2024-05-15 19:46:42.424058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.424442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.424450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.361 qpair failed and we were unable to recover it. 
00:31:16.361 [2024-05-15 19:46:42.424721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.425122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.425131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.361 qpair failed and we were unable to recover it. 00:31:16.361 [2024-05-15 19:46:42.425498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.425717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.425724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.361 qpair failed and we were unable to recover it. 00:31:16.361 [2024-05-15 19:46:42.425781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.425967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.425975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.361 qpair failed and we were unable to recover it. 00:31:16.361 [2024-05-15 19:46:42.426341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.426632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.426640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.361 qpair failed and we were unable to recover it. 00:31:16.361 [2024-05-15 19:46:42.427046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.427458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.427465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.361 qpair failed and we were unable to recover it. 00:31:16.361 [2024-05-15 19:46:42.427691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.428106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.428113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.361 qpair failed and we were unable to recover it. 00:31:16.361 [2024-05-15 19:46:42.428494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.428716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.428723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.361 qpair failed and we were unable to recover it. 
00:31:16.361 [2024-05-15 19:46:42.429105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.429293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.361 [2024-05-15 19:46:42.429301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.362 qpair failed and we were unable to recover it. 00:31:16.362 [2024-05-15 19:46:42.429686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.362 [2024-05-15 19:46:42.430041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.362 [2024-05-15 19:46:42.430049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.362 qpair failed and we were unable to recover it. 00:31:16.362 [2024-05-15 19:46:42.430258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.362 [2024-05-15 19:46:42.430628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.362 [2024-05-15 19:46:42.430637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.362 qpair failed and we were unable to recover it. 00:31:16.362 [2024-05-15 19:46:42.430845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.362 [2024-05-15 19:46:42.431072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.362 [2024-05-15 19:46:42.431081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.362 qpair failed and we were unable to recover it. 00:31:16.362 [2024-05-15 19:46:42.431322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.362 [2024-05-15 19:46:42.431538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.362 [2024-05-15 19:46:42.431547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.362 qpair failed and we were unable to recover it. 00:31:16.362 [2024-05-15 19:46:42.431801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.362 [2024-05-15 19:46:42.432021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.362 [2024-05-15 19:46:42.432028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.362 qpair failed and we were unable to recover it. 00:31:16.362 [2024-05-15 19:46:42.432380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.362 [2024-05-15 19:46:42.432589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.362 [2024-05-15 19:46:42.432596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.362 qpair failed and we were unable to recover it. 
00:31:16.362 [2024-05-15 19:46:42.432983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.362 [2024-05-15 19:46:42.433394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.362 [2024-05-15 19:46:42.433402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.362 qpair failed and we were unable to recover it. 00:31:16.362 [2024-05-15 19:46:42.433849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.362 [2024-05-15 19:46:42.434161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.362 [2024-05-15 19:46:42.434168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.362 qpair failed and we were unable to recover it. 00:31:16.362 [2024-05-15 19:46:42.434543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.362 [2024-05-15 19:46:42.434914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.362 [2024-05-15 19:46:42.434922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.362 qpair failed and we were unable to recover it. 00:31:16.362 [2024-05-15 19:46:42.435292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.362 [2024-05-15 19:46:42.435689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.362 [2024-05-15 19:46:42.435697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.362 qpair failed and we were unable to recover it. 00:31:16.362 [2024-05-15 19:46:42.436089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.362 [2024-05-15 19:46:42.436449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.362 [2024-05-15 19:46:42.436458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.362 qpair failed and we were unable to recover it. 00:31:16.362 [2024-05-15 19:46:42.436852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.362 [2024-05-15 19:46:42.437057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.362 [2024-05-15 19:46:42.437065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.362 qpair failed and we were unable to recover it. 00:31:16.362 [2024-05-15 19:46:42.437435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.362 [2024-05-15 19:46:42.437691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.362 [2024-05-15 19:46:42.437699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.362 qpair failed and we were unable to recover it. 
00:31:16.362 [2024-05-15 19:46:42.438070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.362 [2024-05-15 19:46:42.438468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.362 [2024-05-15 19:46:42.438476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.362 qpair failed and we were unable to recover it. 00:31:16.362 [2024-05-15 19:46:42.438711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.362 [2024-05-15 19:46:42.438919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.362 [2024-05-15 19:46:42.438927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.362 qpair failed and we were unable to recover it. 00:31:16.362 [2024-05-15 19:46:42.439295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.362 [2024-05-15 19:46:42.439498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.362 [2024-05-15 19:46:42.439506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.362 qpair failed and we were unable to recover it. 00:31:16.362 [2024-05-15 19:46:42.439667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.362 [2024-05-15 19:46:42.439863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.362 [2024-05-15 19:46:42.439873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.362 qpair failed and we were unable to recover it. 00:31:16.362 [2024-05-15 19:46:42.440260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.362 [2024-05-15 19:46:42.440422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.362 [2024-05-15 19:46:42.440430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.362 qpair failed and we were unable to recover it. 00:31:16.362 [2024-05-15 19:46:42.440850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.362 [2024-05-15 19:46:42.441056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.362 [2024-05-15 19:46:42.441063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.362 qpair failed and we were unable to recover it. 00:31:16.362 [2024-05-15 19:46:42.441289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.362 [2024-05-15 19:46:42.441510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.362 [2024-05-15 19:46:42.441517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.362 qpair failed and we were unable to recover it. 
00:31:16.362 [2024-05-15 19:46:42.441863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.363 [2024-05-15 19:46:42.442265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.363 [2024-05-15 19:46:42.442273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.363 qpair failed and we were unable to recover it. 00:31:16.363 [2024-05-15 19:46:42.442642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.363 [2024-05-15 19:46:42.443003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.363 [2024-05-15 19:46:42.443010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.363 qpair failed and we were unable to recover it. 00:31:16.363 [2024-05-15 19:46:42.443215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.363 [2024-05-15 19:46:42.443594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.363 [2024-05-15 19:46:42.443602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.363 qpair failed and we were unable to recover it. 00:31:16.363 [2024-05-15 19:46:42.443802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.363 [2024-05-15 19:46:42.444158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.363 [2024-05-15 19:46:42.444166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.363 qpair failed and we were unable to recover it. 00:31:16.363 [2024-05-15 19:46:42.444536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.363 [2024-05-15 19:46:42.444872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.363 [2024-05-15 19:46:42.444879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.363 qpair failed and we were unable to recover it. 00:31:16.363 [2024-05-15 19:46:42.445266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.363 [2024-05-15 19:46:42.445669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.363 [2024-05-15 19:46:42.445676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.363 qpair failed and we were unable to recover it. 00:31:16.363 [2024-05-15 19:46:42.446076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.363 [2024-05-15 19:46:42.446133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.363 [2024-05-15 19:46:42.446142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.363 qpair failed and we were unable to recover it. 
00:31:16.363 [2024-05-15 19:46:42.446488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.363 [2024-05-15 19:46:42.446731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.363 [2024-05-15 19:46:42.446738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.363 qpair failed and we were unable to recover it. 00:31:16.363 [2024-05-15 19:46:42.446794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.363 [2024-05-15 19:46:42.447007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.363 [2024-05-15 19:46:42.447015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.363 qpair failed and we were unable to recover it. 00:31:16.363 [2024-05-15 19:46:42.447215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.363 [2024-05-15 19:46:42.447447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.363 [2024-05-15 19:46:42.447456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.363 qpair failed and we were unable to recover it. 00:31:16.363 [2024-05-15 19:46:42.447846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.363 [2024-05-15 19:46:42.448055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.363 [2024-05-15 19:46:42.448062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.363 qpair failed and we were unable to recover it. 00:31:16.363 [2024-05-15 19:46:42.448431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.363 [2024-05-15 19:46:42.448810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.363 [2024-05-15 19:46:42.448818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.363 qpair failed and we were unable to recover it. 00:31:16.363 [2024-05-15 19:46:42.449188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.363 [2024-05-15 19:46:42.449590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.363 [2024-05-15 19:46:42.449598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.363 qpair failed and we were unable to recover it. 00:31:16.363 [2024-05-15 19:46:42.450009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.363 [2024-05-15 19:46:42.450214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.363 [2024-05-15 19:46:42.450221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.363 qpair failed and we were unable to recover it. 
00:31:16.363 [2024-05-15 19:46:42.450612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.363 [2024-05-15 19:46:42.451016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.363 [2024-05-15 19:46:42.451023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.363 qpair failed and we were unable to recover it. 00:31:16.363 [2024-05-15 19:46:42.451256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.363 [2024-05-15 19:46:42.451498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.363 [2024-05-15 19:46:42.451506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.363 qpair failed and we were unable to recover it. 00:31:16.363 [2024-05-15 19:46:42.451875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.363 [2024-05-15 19:46:42.452226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.363 [2024-05-15 19:46:42.452235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.363 qpair failed and we were unable to recover it. 00:31:16.363 [2024-05-15 19:46:42.452596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.363 [2024-05-15 19:46:42.452928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.363 [2024-05-15 19:46:42.452939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.363 qpair failed and we were unable to recover it. 00:31:16.363 [2024-05-15 19:46:42.453158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.363 [2024-05-15 19:46:42.453328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.364 [2024-05-15 19:46:42.453336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.364 qpair failed and we were unable to recover it. 00:31:16.364 [2024-05-15 19:46:42.453523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.364 [2024-05-15 19:46:42.453851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.364 [2024-05-15 19:46:42.453859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.364 qpair failed and we were unable to recover it. 00:31:16.364 [2024-05-15 19:46:42.454229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.364 [2024-05-15 19:46:42.454591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.364 [2024-05-15 19:46:42.454600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.364 qpair failed and we were unable to recover it. 
00:31:16.364 [2024-05-15 19:46:42.454970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.364 [2024-05-15 19:46:42.455376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.364 [2024-05-15 19:46:42.455384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.364 qpair failed and we were unable to recover it. 00:31:16.364 [2024-05-15 19:46:42.455776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.364 [2024-05-15 19:46:42.456127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.364 [2024-05-15 19:46:42.456135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.364 qpair failed and we were unable to recover it. 00:31:16.364 [2024-05-15 19:46:42.456502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.364 [2024-05-15 19:46:42.456725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.364 [2024-05-15 19:46:42.456732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.364 qpair failed and we were unable to recover it. 00:31:16.364 [2024-05-15 19:46:42.457110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.364 [2024-05-15 19:46:42.457467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.364 [2024-05-15 19:46:42.457475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.364 qpair failed and we were unable to recover it. 00:31:16.364 [2024-05-15 19:46:42.457832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.364 [2024-05-15 19:46:42.458190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.364 [2024-05-15 19:46:42.458198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.364 qpair failed and we were unable to recover it. 00:31:16.364 [2024-05-15 19:46:42.458584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.364 [2024-05-15 19:46:42.458984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.364 [2024-05-15 19:46:42.458994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.364 qpair failed and we were unable to recover it. 00:31:16.364 [2024-05-15 19:46:42.459362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.364 [2024-05-15 19:46:42.459727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.364 [2024-05-15 19:46:42.459735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.364 qpair failed and we were unable to recover it. 
00:31:16.364 [2024-05-15 19:46:42.460114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.364 [2024-05-15 19:46:42.460477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.364 [2024-05-15 19:46:42.460485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.364 qpair failed and we were unable to recover it. 00:31:16.364 [2024-05-15 19:46:42.460852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.364 [2024-05-15 19:46:42.461210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.364 [2024-05-15 19:46:42.461218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.364 qpair failed and we were unable to recover it. 00:31:16.364 [2024-05-15 19:46:42.461586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.364 [2024-05-15 19:46:42.461993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.364 [2024-05-15 19:46:42.462001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.364 qpair failed and we were unable to recover it. 00:31:16.364 [2024-05-15 19:46:42.462372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.364 [2024-05-15 19:46:42.462775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.364 [2024-05-15 19:46:42.462783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.364 qpair failed and we were unable to recover it. 00:31:16.364 [2024-05-15 19:46:42.463150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.364 [2024-05-15 19:46:42.463505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.364 [2024-05-15 19:46:42.463513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.364 qpair failed and we were unable to recover it. 00:31:16.364 [2024-05-15 19:46:42.463884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.364 [2024-05-15 19:46:42.464239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.364 [2024-05-15 19:46:42.464246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.364 qpair failed and we were unable to recover it. 00:31:16.364 [2024-05-15 19:46:42.464662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.364 [2024-05-15 19:46:42.465020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.364 [2024-05-15 19:46:42.465027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.364 qpair failed and we were unable to recover it. 
00:31:16.364 [2024-05-15 19:46:42.465402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.364 [2024-05-15 19:46:42.465788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.364 [2024-05-15 19:46:42.465795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.364 qpair failed and we were unable to recover it. 00:31:16.364 [2024-05-15 19:46:42.466157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.364 [2024-05-15 19:46:42.466518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.364 [2024-05-15 19:46:42.466526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.364 qpair failed and we were unable to recover it. 00:31:16.364 [2024-05-15 19:46:42.466899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.364 [2024-05-15 19:46:42.467298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.364 [2024-05-15 19:46:42.467305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.364 qpair failed and we were unable to recover it. 00:31:16.364 [2024-05-15 19:46:42.467363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.364 [2024-05-15 19:46:42.467711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.364 [2024-05-15 19:46:42.467720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.364 qpair failed and we were unable to recover it. 00:31:16.364 [2024-05-15 19:46:42.468091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.468255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.468261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.365 qpair failed and we were unable to recover it. 00:31:16.365 [2024-05-15 19:46:42.468643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.468809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.468816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.365 qpair failed and we were unable to recover it. 00:31:16.365 [2024-05-15 19:46:42.469162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.469368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.469376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.365 qpair failed and we were unable to recover it. 
00:31:16.365 [2024-05-15 19:46:42.469747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.470100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.470108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.365 qpair failed and we were unable to recover it. 00:31:16.365 [2024-05-15 19:46:42.470487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.470893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.470901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.365 qpair failed and we were unable to recover it. 00:31:16.365 [2024-05-15 19:46:42.471266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.471512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.471519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.365 qpair failed and we were unable to recover it. 00:31:16.365 [2024-05-15 19:46:42.471889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.472285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.472293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.365 qpair failed and we were unable to recover it. 00:31:16.365 [2024-05-15 19:46:42.472493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.472851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.472858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.365 qpair failed and we were unable to recover it. 00:31:16.365 [2024-05-15 19:46:42.472924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.473196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.473203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.365 qpair failed and we were unable to recover it. 00:31:16.365 [2024-05-15 19:46:42.473596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.473883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.473891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.365 qpair failed and we were unable to recover it. 
00:31:16.365 [2024-05-15 19:46:42.474277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.474677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.474685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.365 qpair failed and we were unable to recover it. 00:31:16.365 [2024-05-15 19:46:42.475075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.475431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.475439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.365 qpair failed and we were unable to recover it. 00:31:16.365 [2024-05-15 19:46:42.475808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.475880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.475889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.365 qpair failed and we were unable to recover it. 00:31:16.365 [2024-05-15 19:46:42.476217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.476576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.476584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.365 qpair failed and we were unable to recover it. 00:31:16.365 [2024-05-15 19:46:42.476765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.476992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.477001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.365 qpair failed and we were unable to recover it. 00:31:16.365 [2024-05-15 19:46:42.477248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.477304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.477311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.365 qpair failed and we were unable to recover it. 00:31:16.365 [2024-05-15 19:46:42.477670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.478080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.478088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.365 qpair failed and we were unable to recover it. 
00:31:16.365 [2024-05-15 19:46:42.478481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.478891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.478898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.365 qpair failed and we were unable to recover it. 00:31:16.365 [2024-05-15 19:46:42.479272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.479664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.479671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.365 qpair failed and we were unable to recover it. 00:31:16.365 [2024-05-15 19:46:42.479913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.480319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.480327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.365 qpair failed and we were unable to recover it. 00:31:16.365 [2024-05-15 19:46:42.480692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.481092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.481100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.365 qpair failed and we were unable to recover it. 00:31:16.365 [2024-05-15 19:46:42.481468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.481870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.481877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.365 qpair failed and we were unable to recover it. 00:31:16.365 [2024-05-15 19:46:42.482246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.482468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.482476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.365 qpair failed and we were unable to recover it. 00:31:16.365 [2024-05-15 19:46:42.482859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.483265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.483273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.365 qpair failed and we were unable to recover it. 
00:31:16.365 [2024-05-15 19:46:42.483642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.483882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.483890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.365 qpair failed and we were unable to recover it. 00:31:16.365 [2024-05-15 19:46:42.484306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.484552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.365 [2024-05-15 19:46:42.484560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.366 qpair failed and we were unable to recover it. 00:31:16.366 [2024-05-15 19:46:42.484936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.485174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.485181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.366 qpair failed and we were unable to recover it. 00:31:16.366 [2024-05-15 19:46:42.485400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.485742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.485750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.366 qpair failed and we were unable to recover it. 00:31:16.366 [2024-05-15 19:46:42.486200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.486497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.486505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.366 qpair failed and we were unable to recover it. 00:31:16.366 [2024-05-15 19:46:42.486865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.487223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.487231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.366 qpair failed and we were unable to recover it. 00:31:16.366 [2024-05-15 19:46:42.487626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.488029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.488037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.366 qpair failed and we were unable to recover it. 
00:31:16.366 [2024-05-15 19:46:42.488425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.488808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.488815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.366 qpair failed and we were unable to recover it. 00:31:16.366 [2024-05-15 19:46:42.489091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.489493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.489501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.366 qpair failed and we were unable to recover it. 00:31:16.366 [2024-05-15 19:46:42.489787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.490138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.490146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.366 qpair failed and we were unable to recover it. 00:31:16.366 [2024-05-15 19:46:42.490471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.490848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.490856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.366 qpair failed and we were unable to recover it. 00:31:16.366 [2024-05-15 19:46:42.491094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.491463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.491471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.366 qpair failed and we were unable to recover it. 00:31:16.366 [2024-05-15 19:46:42.491574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.491802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.491810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.366 qpair failed and we were unable to recover it. 00:31:16.366 [2024-05-15 19:46:42.492046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.492095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.492102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.366 qpair failed and we were unable to recover it. 
00:31:16.366 [2024-05-15 19:46:42.492348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.492763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.492771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.366 qpair failed and we were unable to recover it. 00:31:16.366 [2024-05-15 19:46:42.493145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.493354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.493362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.366 qpair failed and we were unable to recover it. 00:31:16.366 [2024-05-15 19:46:42.493552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.493887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.493895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.366 qpair failed and we were unable to recover it. 00:31:16.366 [2024-05-15 19:46:42.494263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.494552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.494560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.366 qpair failed and we were unable to recover it. 00:31:16.366 [2024-05-15 19:46:42.494759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.495098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.495106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.366 qpair failed and we were unable to recover it. 00:31:16.366 [2024-05-15 19:46:42.495497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.495905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.495913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.366 qpair failed and we were unable to recover it. 00:31:16.366 [2024-05-15 19:46:42.496179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.496423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.496431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.366 qpair failed and we were unable to recover it. 
00:31:16.366 [2024-05-15 19:46:42.496793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.497194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.497201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.366 qpair failed and we were unable to recover it. 00:31:16.366 [2024-05-15 19:46:42.497395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.497638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.497646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.366 qpair failed and we were unable to recover it. 00:31:16.366 [2024-05-15 19:46:42.497900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.498110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.498119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.366 qpair failed and we were unable to recover it. 00:31:16.366 [2024-05-15 19:46:42.498344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.498678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.498686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.366 qpair failed and we were unable to recover it. 00:31:16.366 [2024-05-15 19:46:42.499058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.499462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.499469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.366 qpair failed and we were unable to recover it. 00:31:16.366 [2024-05-15 19:46:42.499703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.500058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.500066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.366 qpair failed and we were unable to recover it. 00:31:16.366 [2024-05-15 19:46:42.500458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.500859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.500866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.366 qpair failed and we were unable to recover it. 
00:31:16.366 [2024-05-15 19:46:42.501239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.501604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.501612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.366 qpair failed and we were unable to recover it. 00:31:16.366 [2024-05-15 19:46:42.501801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.502026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.502034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.366 qpair failed and we were unable to recover it. 00:31:16.366 [2024-05-15 19:46:42.502248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.502568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.502576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.366 qpair failed and we were unable to recover it. 00:31:16.366 [2024-05-15 19:46:42.502773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.503127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.503134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.366 qpair failed and we were unable to recover it. 00:31:16.366 [2024-05-15 19:46:42.503346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.503674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.503681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.366 qpair failed and we were unable to recover it. 00:31:16.366 [2024-05-15 19:46:42.503959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.504362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.504370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.366 qpair failed and we were unable to recover it. 00:31:16.366 [2024-05-15 19:46:42.504748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.504994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.505001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.366 qpair failed and we were unable to recover it. 
00:31:16.366 [2024-05-15 19:46:42.505394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.505769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.505777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.366 qpair failed and we were unable to recover it. 00:31:16.366 [2024-05-15 19:46:42.506008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.506409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.506418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.366 qpair failed and we were unable to recover it. 00:31:16.366 [2024-05-15 19:46:42.506783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.507001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.507008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.366 qpair failed and we were unable to recover it. 00:31:16.366 [2024-05-15 19:46:42.507243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.507582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.366 [2024-05-15 19:46:42.507590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.366 qpair failed and we were unable to recover it. 00:31:16.367 [2024-05-15 19:46:42.507981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.367 [2024-05-15 19:46:42.508180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.367 [2024-05-15 19:46:42.508189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.367 qpair failed and we were unable to recover it. 00:31:16.367 [2024-05-15 19:46:42.508413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.367 [2024-05-15 19:46:42.508630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.367 [2024-05-15 19:46:42.508637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.367 qpair failed and we were unable to recover it. 00:31:16.367 [2024-05-15 19:46:42.508957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.367 [2024-05-15 19:46:42.509345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.367 [2024-05-15 19:46:42.509353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.367 qpair failed and we were unable to recover it. 
00:31:16.367 [2024-05-15 19:46:42.509719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.367 [2024-05-15 19:46:42.510073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.367 [2024-05-15 19:46:42.510081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.367 qpair failed and we were unable to recover it. 00:31:16.367 [2024-05-15 19:46:42.510482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.367 [2024-05-15 19:46:42.510881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.367 [2024-05-15 19:46:42.510888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.367 qpair failed and we were unable to recover it. 00:31:16.367 [2024-05-15 19:46:42.511257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.367 [2024-05-15 19:46:42.511658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.367 [2024-05-15 19:46:42.511666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.367 qpair failed and we were unable to recover it. 00:31:16.367 [2024-05-15 19:46:42.511864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.367 [2024-05-15 19:46:42.512204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.367 [2024-05-15 19:46:42.512212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.367 qpair failed and we were unable to recover it. 00:31:16.367 [2024-05-15 19:46:42.512547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.367 [2024-05-15 19:46:42.512922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.367 [2024-05-15 19:46:42.512930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.367 qpair failed and we were unable to recover it. 00:31:16.367 [2024-05-15 19:46:42.513330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.367 [2024-05-15 19:46:42.513664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.367 [2024-05-15 19:46:42.513672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.367 qpair failed and we were unable to recover it. 00:31:16.367 [2024-05-15 19:46:42.514028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.367 [2024-05-15 19:46:42.514388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.367 [2024-05-15 19:46:42.514396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.367 qpair failed and we were unable to recover it. 
00:31:16.367 [2024-05-15 19:46:42.514638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.367 [2024-05-15 19:46:42.514842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.367 [2024-05-15 19:46:42.514849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.367 qpair failed and we were unable to recover it. 00:31:16.367 [2024-05-15 19:46:42.515217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.367 [2024-05-15 19:46:42.515633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.367 [2024-05-15 19:46:42.515641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.367 qpair failed and we were unable to recover it. 00:31:16.367 [2024-05-15 19:46:42.515996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.367 [2024-05-15 19:46:42.516201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.367 [2024-05-15 19:46:42.516209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.367 qpair failed and we were unable to recover it. 00:31:16.367 [2024-05-15 19:46:42.516581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.367 [2024-05-15 19:46:42.516982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.367 [2024-05-15 19:46:42.516990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.367 qpair failed and we were unable to recover it. 00:31:16.367 [2024-05-15 19:46:42.517363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.367 [2024-05-15 19:46:42.517729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.367 [2024-05-15 19:46:42.517736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.367 qpair failed and we were unable to recover it. 00:31:16.367 [2024-05-15 19:46:42.517935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.367 [2024-05-15 19:46:42.518267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.367 [2024-05-15 19:46:42.518275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.367 qpair failed and we were unable to recover it. 00:31:16.367 [2024-05-15 19:46:42.518673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.367 [2024-05-15 19:46:42.518880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.367 [2024-05-15 19:46:42.518887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.367 qpair failed and we were unable to recover it. 
00:31:16.367 [2024-05-15 19:46:42.519120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.367 [2024-05-15 19:46:42.519470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.367 [2024-05-15 19:46:42.519478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.367 qpair failed and we were unable to recover it. 00:31:16.367 [2024-05-15 19:46:42.519855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.367 [2024-05-15 19:46:42.520212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.367 [2024-05-15 19:46:42.520219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.367 qpair failed and we were unable to recover it. 00:31:16.367 [2024-05-15 19:46:42.520422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.367 [2024-05-15 19:46:42.520773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.367 [2024-05-15 19:46:42.520781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.367 qpair failed and we were unable to recover it. 00:31:16.639 [2024-05-15 19:46:42.521170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.639 [2024-05-15 19:46:42.521367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.639 [2024-05-15 19:46:42.521375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.639 qpair failed and we were unable to recover it. 00:31:16.639 [2024-05-15 19:46:42.521733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.639 [2024-05-15 19:46:42.522140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.639 [2024-05-15 19:46:42.522148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.639 qpair failed and we were unable to recover it. 00:31:16.639 [2024-05-15 19:46:42.522507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.639 [2024-05-15 19:46:42.522868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.639 [2024-05-15 19:46:42.522876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.639 qpair failed and we were unable to recover it. 00:31:16.639 [2024-05-15 19:46:42.523118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.639 [2024-05-15 19:46:42.523477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.639 [2024-05-15 19:46:42.523485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.639 qpair failed and we were unable to recover it. 
00:31:16.639 [2024-05-15 19:46:42.523883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.639 [2024-05-15 19:46:42.524125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.639 [2024-05-15 19:46:42.524133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.639 qpair failed and we were unable to recover it. 00:31:16.639 [2024-05-15 19:46:42.524508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.639 [2024-05-15 19:46:42.524911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.639 [2024-05-15 19:46:42.524920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.639 qpair failed and we were unable to recover it. 00:31:16.639 [2024-05-15 19:46:42.525294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.639 [2024-05-15 19:46:42.525537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.639 [2024-05-15 19:46:42.525546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.639 qpair failed and we were unable to recover it. 00:31:16.639 [2024-05-15 19:46:42.525917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.639 [2024-05-15 19:46:42.526272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.639 [2024-05-15 19:46:42.526281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.639 qpair failed and we were unable to recover it. 00:31:16.639 [2024-05-15 19:46:42.526481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.639 [2024-05-15 19:46:42.526874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.639 [2024-05-15 19:46:42.526882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.639 qpair failed and we were unable to recover it. 00:31:16.639 [2024-05-15 19:46:42.527309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.639 [2024-05-15 19:46:42.527694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.639 [2024-05-15 19:46:42.527702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.639 qpair failed and we were unable to recover it. 00:31:16.639 [2024-05-15 19:46:42.527902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.639 [2024-05-15 19:46:42.528242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.639 [2024-05-15 19:46:42.528250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.639 qpair failed and we were unable to recover it. 
00:31:16.639 [2024-05-15 19:46:42.528481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.639 [2024-05-15 19:46:42.528867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.639 [2024-05-15 19:46:42.528875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.639 qpair failed and we were unable to recover it. 00:31:16.639 [2024-05-15 19:46:42.529212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.639 [2024-05-15 19:46:42.529587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.639 [2024-05-15 19:46:42.529596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.639 qpair failed and we were unable to recover it. 00:31:16.639 [2024-05-15 19:46:42.529955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.639 [2024-05-15 19:46:42.530324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.639 [2024-05-15 19:46:42.530332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.639 qpair failed and we were unable to recover it. 00:31:16.639 [2024-05-15 19:46:42.530678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.639 [2024-05-15 19:46:42.530917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.639 [2024-05-15 19:46:42.530925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.639 qpair failed and we were unable to recover it. 00:31:16.639 [2024-05-15 19:46:42.531299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.639 [2024-05-15 19:46:42.531708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.639 [2024-05-15 19:46:42.531717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.639 qpair failed and we were unable to recover it. 00:31:16.639 [2024-05-15 19:46:42.532111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.639 [2024-05-15 19:46:42.532355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.639 [2024-05-15 19:46:42.532362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.639 qpair failed and we were unable to recover it. 00:31:16.640 [2024-05-15 19:46:42.532737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.532908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.532915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.640 qpair failed and we were unable to recover it. 
00:31:16.640 [2024-05-15 19:46:42.533131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.533376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.533384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.640 qpair failed and we were unable to recover it. 00:31:16.640 [2024-05-15 19:46:42.533757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.533963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.533971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.640 qpair failed and we were unable to recover it. 00:31:16.640 [2024-05-15 19:46:42.534390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.534722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.534729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.640 qpair failed and we were unable to recover it. 00:31:16.640 [2024-05-15 19:46:42.535093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.535317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.535325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.640 qpair failed and we were unable to recover it. 00:31:16.640 [2024-05-15 19:46:42.535634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.535873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.535881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.640 qpair failed and we were unable to recover it. 00:31:16.640 [2024-05-15 19:46:42.536089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.536273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.536281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.640 qpair failed and we were unable to recover it. 00:31:16.640 [2024-05-15 19:46:42.536657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.536971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.536979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.640 qpair failed and we were unable to recover it. 
00:31:16.640 [2024-05-15 19:46:42.537190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.537533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.537543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.640 qpair failed and we were unable to recover it. 00:31:16.640 [2024-05-15 19:46:42.537756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.537994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.538003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.640 qpair failed and we were unable to recover it. 00:31:16.640 [2024-05-15 19:46:42.538376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.538698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.538705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.640 qpair failed and we were unable to recover it. 00:31:16.640 [2024-05-15 19:46:42.538942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.539151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.539159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.640 qpair failed and we were unable to recover it. 00:31:16.640 [2024-05-15 19:46:42.539541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.539946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.539953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.640 qpair failed and we were unable to recover it. 00:31:16.640 [2024-05-15 19:46:42.540320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.540506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.540514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.640 qpair failed and we were unable to recover it. 00:31:16.640 [2024-05-15 19:46:42.540856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.541258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.541265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.640 qpair failed and we were unable to recover it. 
00:31:16.640 [2024-05-15 19:46:42.541632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.542037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.542044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.640 qpair failed and we were unable to recover it. 00:31:16.640 [2024-05-15 19:46:42.542281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.542602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.542610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.640 qpair failed and we were unable to recover it. 00:31:16.640 [2024-05-15 19:46:42.542980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.543380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.543388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.640 qpair failed and we were unable to recover it. 00:31:16.640 [2024-05-15 19:46:42.543760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.544161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.544171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.640 qpair failed and we were unable to recover it. 00:31:16.640 [2024-05-15 19:46:42.544569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.544698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.544705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.640 qpair failed and we were unable to recover it. 00:31:16.640 [2024-05-15 19:46:42.545014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.545398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.545406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.640 qpair failed and we were unable to recover it. 00:31:16.640 [2024-05-15 19:46:42.545652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.546008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.546015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.640 qpair failed and we were unable to recover it. 
00:31:16.640 [2024-05-15 19:46:42.546383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.546665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.546673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.640 qpair failed and we were unable to recover it. 00:31:16.640 [2024-05-15 19:46:42.546866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.547035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.547042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.640 qpair failed and we were unable to recover it. 00:31:16.640 [2024-05-15 19:46:42.547294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.547654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.547662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.640 qpair failed and we were unable to recover it. 00:31:16.640 [2024-05-15 19:46:42.547858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.548197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.548205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.640 qpair failed and we were unable to recover it. 00:31:16.640 [2024-05-15 19:46:42.548274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.548527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.548535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.640 qpair failed and we were unable to recover it. 00:31:16.640 [2024-05-15 19:46:42.548913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.549269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.549276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.640 qpair failed and we were unable to recover it. 00:31:16.640 [2024-05-15 19:46:42.549644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.550001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.640 [2024-05-15 19:46:42.550010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.640 qpair failed and we were unable to recover it. 
00:31:16.640 [2024-05-15 19:46:42.550378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.550738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.550746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.641 qpair failed and we were unable to recover it. 00:31:16.641 [2024-05-15 19:46:42.551140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.551490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.551497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.641 qpair failed and we were unable to recover it. 00:31:16.641 [2024-05-15 19:46:42.551888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.552104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.552113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.641 qpair failed and we were unable to recover it. 00:31:16.641 [2024-05-15 19:46:42.552487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.552869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.552877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.641 qpair failed and we were unable to recover it. 00:31:16.641 [2024-05-15 19:46:42.553268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.553509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.553517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.641 qpair failed and we were unable to recover it. 00:31:16.641 [2024-05-15 19:46:42.553889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.554290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.554297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.641 qpair failed and we were unable to recover it. 00:31:16.641 [2024-05-15 19:46:42.554690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.555039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.555046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.641 qpair failed and we were unable to recover it. 
00:31:16.641 [2024-05-15 19:46:42.555416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.555607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.555614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.641 qpair failed and we were unable to recover it. 00:31:16.641 [2024-05-15 19:46:42.555950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.556352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.556360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.641 qpair failed and we were unable to recover it. 00:31:16.641 [2024-05-15 19:46:42.556623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.557024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.557033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.641 qpair failed and we were unable to recover it. 00:31:16.641 [2024-05-15 19:46:42.557173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.557536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.557544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.641 qpair failed and we were unable to recover it. 00:31:16.641 [2024-05-15 19:46:42.557918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.558326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.558333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.641 qpair failed and we were unable to recover it. 00:31:16.641 [2024-05-15 19:46:42.558579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.558819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.558828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.641 qpair failed and we were unable to recover it. 00:31:16.641 [2024-05-15 19:46:42.559195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.559575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.559583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.641 qpair failed and we were unable to recover it. 
00:31:16.641 [2024-05-15 19:46:42.559790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.560133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.560141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.641 qpair failed and we were unable to recover it. 00:31:16.641 [2024-05-15 19:46:42.560508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.560867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.560875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.641 qpair failed and we were unable to recover it. 00:31:16.641 [2024-05-15 19:46:42.561271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.561676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.561687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.641 qpair failed and we were unable to recover it. 00:31:16.641 [2024-05-15 19:46:42.561918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.562322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.562330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.641 qpair failed and we were unable to recover it. 00:31:16.641 [2024-05-15 19:46:42.562549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.562951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.562959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.641 qpair failed and we were unable to recover it. 00:31:16.641 [2024-05-15 19:46:42.563327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.563584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.563592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.641 qpair failed and we were unable to recover it. 00:31:16.641 [2024-05-15 19:46:42.563829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.564137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.564145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.641 qpair failed and we were unable to recover it. 
00:31:16.641 [2024-05-15 19:46:42.564500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.564675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.564683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.641 qpair failed and we were unable to recover it. 00:31:16.641 [2024-05-15 19:46:42.565000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.565394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.565402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.641 qpair failed and we were unable to recover it. 00:31:16.641 [2024-05-15 19:46:42.565762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.566118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.566125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.641 qpair failed and we were unable to recover it. 00:31:16.641 [2024-05-15 19:46:42.566357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.566737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.566745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.641 qpair failed and we were unable to recover it. 00:31:16.641 [2024-05-15 19:46:42.567114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.567323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.567331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.641 qpair failed and we were unable to recover it. 00:31:16.641 [2024-05-15 19:46:42.567695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.567902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.567910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.641 qpair failed and we were unable to recover it. 00:31:16.641 [2024-05-15 19:46:42.568182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.568567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.568575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.641 qpair failed and we were unable to recover it. 
00:31:16.641 [2024-05-15 19:46:42.568941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.569148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.569156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.641 qpair failed and we were unable to recover it. 00:31:16.641 [2024-05-15 19:46:42.569522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.569889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.641 [2024-05-15 19:46:42.569897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.641 qpair failed and we were unable to recover it. 00:31:16.642 [2024-05-15 19:46:42.570096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.570485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.570493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.642 qpair failed and we were unable to recover it. 00:31:16.642 [2024-05-15 19:46:42.570764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.571125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.571132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.642 qpair failed and we were unable to recover it. 00:31:16.642 [2024-05-15 19:46:42.571511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.571868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.571876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.642 qpair failed and we were unable to recover it. 00:31:16.642 [2024-05-15 19:46:42.572112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.572447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.572455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.642 qpair failed and we were unable to recover it. 00:31:16.642 [2024-05-15 19:46:42.572878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.573279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.573286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.642 qpair failed and we were unable to recover it. 
00:31:16.642 [2024-05-15 19:46:42.573486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.573897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.573905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.642 qpair failed and we were unable to recover it. 00:31:16.642 [2024-05-15 19:46:42.573986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.574223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.574230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.642 qpair failed and we were unable to recover it. 00:31:16.642 [2024-05-15 19:46:42.574620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.574890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.574898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.642 qpair failed and we were unable to recover it. 00:31:16.642 [2024-05-15 19:46:42.575307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.575698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.575706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.642 qpair failed and we were unable to recover it. 00:31:16.642 [2024-05-15 19:46:42.576065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.576468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.576476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.642 qpair failed and we were unable to recover it. 00:31:16.642 [2024-05-15 19:46:42.576890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.577291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.577299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.642 qpair failed and we were unable to recover it. 00:31:16.642 [2024-05-15 19:46:42.577752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.578065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.578073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.642 qpair failed and we were unable to recover it. 
00:31:16.642 [2024-05-15 19:46:42.578466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.578710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.578718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.642 qpair failed and we were unable to recover it. 00:31:16.642 [2024-05-15 19:46:42.579088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.579448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.579456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.642 qpair failed and we were unable to recover it. 00:31:16.642 [2024-05-15 19:46:42.579849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.580251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.580259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.642 qpair failed and we were unable to recover it. 00:31:16.642 [2024-05-15 19:46:42.580471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.580869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.580877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.642 qpair failed and we were unable to recover it. 00:31:16.642 [2024-05-15 19:46:42.581111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.581355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.581363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.642 qpair failed and we were unable to recover it. 00:31:16.642 [2024-05-15 19:46:42.581572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.581751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.581760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.642 qpair failed and we were unable to recover it. 00:31:16.642 [2024-05-15 19:46:42.581974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.582291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.582299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.642 qpair failed and we were unable to recover it. 
00:31:16.642 [2024-05-15 19:46:42.582754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.582959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.582967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.642 qpair failed and we were unable to recover it. 00:31:16.642 [2024-05-15 19:46:42.583169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.583511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.583518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.642 qpair failed and we were unable to recover it. 00:31:16.642 [2024-05-15 19:46:42.583750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.583998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.584007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.642 qpair failed and we were unable to recover it. 00:31:16.642 [2024-05-15 19:46:42.584398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.584590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.584597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.642 qpair failed and we were unable to recover it. 00:31:16.642 [2024-05-15 19:46:42.584979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.585186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.585194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.642 qpair failed and we were unable to recover it. 00:31:16.642 [2024-05-15 19:46:42.585543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.585747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.585755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.642 qpair failed and we were unable to recover it. 00:31:16.642 [2024-05-15 19:46:42.586072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.586279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.586288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.642 qpair failed and we were unable to recover it. 
00:31:16.642 [2024-05-15 19:46:42.586624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.587028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.587037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.642 qpair failed and we were unable to recover it. 00:31:16.642 [2024-05-15 19:46:42.587401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.587608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.587615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.642 qpair failed and we were unable to recover it. 00:31:16.642 [2024-05-15 19:46:42.587978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.588379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.588386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.642 qpair failed and we were unable to recover it. 00:31:16.642 [2024-05-15 19:46:42.588618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.642 [2024-05-15 19:46:42.588861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.588869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.643 qpair failed and we were unable to recover it. 00:31:16.643 [2024-05-15 19:46:42.589070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.589440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.589448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.643 qpair failed and we were unable to recover it. 00:31:16.643 [2024-05-15 19:46:42.589641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.589990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.589998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.643 qpair failed and we were unable to recover it. 00:31:16.643 [2024-05-15 19:46:42.590356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.590727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.590735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.643 qpair failed and we were unable to recover it. 
00:31:16.643 [2024-05-15 19:46:42.590930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.591184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.591191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.643 qpair failed and we were unable to recover it. 00:31:16.643 [2024-05-15 19:46:42.591660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.592014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.592022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.643 qpair failed and we were unable to recover it. 00:31:16.643 [2024-05-15 19:46:42.592219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.592437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.592445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.643 qpair failed and we were unable to recover it. 00:31:16.643 [2024-05-15 19:46:42.592808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.593208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.593215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.643 qpair failed and we were unable to recover it. 00:31:16.643 [2024-05-15 19:46:42.593610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.594014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.594022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.643 qpair failed and we were unable to recover it. 00:31:16.643 [2024-05-15 19:46:42.594390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.594593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.594604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.643 qpair failed and we were unable to recover it. 00:31:16.643 [2024-05-15 19:46:42.594825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.595204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.595212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.643 qpair failed and we were unable to recover it. 
00:31:16.643 [2024-05-15 19:46:42.595604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.596011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.596019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.643 qpair failed and we were unable to recover it. 00:31:16.643 [2024-05-15 19:46:42.596394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.596757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.596764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.643 qpair failed and we were unable to recover it. 00:31:16.643 [2024-05-15 19:46:42.596998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.597351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.597359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.643 qpair failed and we were unable to recover it. 00:31:16.643 [2024-05-15 19:46:42.597561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.597893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.597901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.643 qpair failed and we were unable to recover it. 00:31:16.643 [2024-05-15 19:46:42.598289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.598531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.598539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.643 qpair failed and we were unable to recover it. 00:31:16.643 [2024-05-15 19:46:42.598813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.599218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.599226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.643 qpair failed and we were unable to recover it. 00:31:16.643 [2024-05-15 19:46:42.599581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.599966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.599974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.643 qpair failed and we were unable to recover it. 
00:31:16.643 [2024-05-15 19:46:42.600347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.600720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.600728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.643 qpair failed and we were unable to recover it. 00:31:16.643 [2024-05-15 19:46:42.600960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.601364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.601373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.643 qpair failed and we were unable to recover it. 00:31:16.643 [2024-05-15 19:46:42.601743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.602145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.602153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.643 qpair failed and we were unable to recover it. 00:31:16.643 [2024-05-15 19:46:42.602568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.602937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.602944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.643 qpair failed and we were unable to recover it. 00:31:16.643 [2024-05-15 19:46:42.603112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.603472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.603481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.643 qpair failed and we were unable to recover it. 00:31:16.643 [2024-05-15 19:46:42.603875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.604117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.604126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.643 qpair failed and we were unable to recover it. 00:31:16.643 [2024-05-15 19:46:42.604542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.604930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.643 [2024-05-15 19:46:42.604937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.643 qpair failed and we were unable to recover it. 
00:31:16.644 [2024-05-15 19:46:42.605167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.605564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.605572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.644 qpair failed and we were unable to recover it. 00:31:16.644 [2024-05-15 19:46:42.605780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.605964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.605973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.644 qpair failed and we were unable to recover it. 00:31:16.644 [2024-05-15 19:46:42.606352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.606722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.606730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.644 qpair failed and we were unable to recover it. 00:31:16.644 [2024-05-15 19:46:42.606826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.607146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.607154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.644 qpair failed and we were unable to recover it. 00:31:16.644 [2024-05-15 19:46:42.607364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.607706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.607714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.644 qpair failed and we were unable to recover it. 00:31:16.644 [2024-05-15 19:46:42.608075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.608282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.608290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.644 qpair failed and we were unable to recover it. 00:31:16.644 [2024-05-15 19:46:42.608653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.609059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.609067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.644 qpair failed and we were unable to recover it. 
00:31:16.644 [2024-05-15 19:46:42.609443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.609839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.609847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.644 qpair failed and we were unable to recover it. 00:31:16.644 [2024-05-15 19:46:42.610166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.610539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.610547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.644 qpair failed and we were unable to recover it. 00:31:16.644 [2024-05-15 19:46:42.610917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.611271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.611278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.644 qpair failed and we were unable to recover it. 00:31:16.644 [2024-05-15 19:46:42.611500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.611725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.611732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.644 qpair failed and we were unable to recover it. 00:31:16.644 [2024-05-15 19:46:42.612096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.612499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.612507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.644 qpair failed and we were unable to recover it. 00:31:16.644 [2024-05-15 19:46:42.612875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.613279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.613286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.644 qpair failed and we were unable to recover it. 00:31:16.644 [2024-05-15 19:46:42.613655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.613875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.613883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.644 qpair failed and we were unable to recover it. 
00:31:16.644 [2024-05-15 19:46:42.614257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.614655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.614664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.644 qpair failed and we were unable to recover it. 00:31:16.644 [2024-05-15 19:46:42.615034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.615390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.615398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.644 qpair failed and we were unable to recover it. 00:31:16.644 [2024-05-15 19:46:42.615803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.616091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.616099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.644 qpair failed and we were unable to recover it. 00:31:16.644 [2024-05-15 19:46:42.616280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.616626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.616634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.644 qpair failed and we were unable to recover it. 00:31:16.644 [2024-05-15 19:46:42.616851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.617208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.617216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.644 qpair failed and we were unable to recover it. 00:31:16.644 [2024-05-15 19:46:42.617559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.617947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.617955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.644 qpair failed and we were unable to recover it. 00:31:16.644 [2024-05-15 19:46:42.618327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.618701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.618709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.644 qpair failed and we were unable to recover it. 
00:31:16.644 [2024-05-15 19:46:42.618904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.619267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.619275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.644 qpair failed and we were unable to recover it. 00:31:16.644 [2024-05-15 19:46:42.619509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.619819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.619826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.644 qpair failed and we were unable to recover it. 00:31:16.644 [2024-05-15 19:46:42.620196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.620539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.620547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.644 qpair failed and we were unable to recover it. 00:31:16.644 [2024-05-15 19:46:42.620915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.621324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.621335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.644 qpair failed and we were unable to recover it. 00:31:16.644 [2024-05-15 19:46:42.621690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.621898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.621905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.644 qpair failed and we were unable to recover it. 00:31:16.644 [2024-05-15 19:46:42.622271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.622630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.622638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.644 qpair failed and we were unable to recover it. 00:31:16.644 [2024-05-15 19:46:42.622696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.623073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.623081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.644 qpair failed and we were unable to recover it. 
00:31:16.644 [2024-05-15 19:46:42.623450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.623804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.623812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.644 qpair failed and we were unable to recover it. 00:31:16.644 [2024-05-15 19:46:42.624179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.644 [2024-05-15 19:46:42.624540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.645 [2024-05-15 19:46:42.624548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.645 qpair failed and we were unable to recover it. 00:31:16.645 [2024-05-15 19:46:42.624959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.645 [2024-05-15 19:46:42.625241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.645 [2024-05-15 19:46:42.625250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.645 qpair failed and we were unable to recover it. 00:31:16.645 [2024-05-15 19:46:42.625503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.645 [2024-05-15 19:46:42.625880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.645 [2024-05-15 19:46:42.625888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.645 qpair failed and we were unable to recover it. 00:31:16.645 [2024-05-15 19:46:42.626162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.645 [2024-05-15 19:46:42.626560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.645 [2024-05-15 19:46:42.626568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.645 qpair failed and we were unable to recover it. 00:31:16.645 [2024-05-15 19:46:42.626665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.645 [2024-05-15 19:46:42.627013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.645 [2024-05-15 19:46:42.627021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.645 qpair failed and we were unable to recover it. 00:31:16.645 [2024-05-15 19:46:42.627237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.645 [2024-05-15 19:46:42.627436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.645 [2024-05-15 19:46:42.627443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.645 qpair failed and we were unable to recover it. 
00:31:16.645 [2024-05-15 19:46:42.627754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.645 [2024-05-15 19:46:42.628141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.645 [2024-05-15 19:46:42.628150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.645 qpair failed and we were unable to recover it. 00:31:16.645 [2024-05-15 19:46:42.628518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.645 [2024-05-15 19:46:42.628872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.645 [2024-05-15 19:46:42.628880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.645 qpair failed and we were unable to recover it. 00:31:16.645 [2024-05-15 19:46:42.629253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.645 [2024-05-15 19:46:42.629612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.645 [2024-05-15 19:46:42.629619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.645 qpair failed and we were unable to recover it. 00:31:16.645 [2024-05-15 19:46:42.629893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.645 [2024-05-15 19:46:42.630297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.645 [2024-05-15 19:46:42.630304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.645 qpair failed and we were unable to recover it. 00:31:16.645 [2024-05-15 19:46:42.630541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.645 [2024-05-15 19:46:42.630896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.645 [2024-05-15 19:46:42.630904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.645 qpair failed and we were unable to recover it. 00:31:16.645 [2024-05-15 19:46:42.631333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.645 [2024-05-15 19:46:42.631555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.645 [2024-05-15 19:46:42.631562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.645 qpair failed and we were unable to recover it. 00:31:16.645 [2024-05-15 19:46:42.631930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.645 [2024-05-15 19:46:42.632023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.645 [2024-05-15 19:46:42.632031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.645 qpair failed and we were unable to recover it. 
00:31:16.645 [2024-05-15 19:46:42.632385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.645 [2024-05-15 19:46:42.632670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.645 [2024-05-15 19:46:42.632677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.645 qpair failed and we were unable to recover it. 00:31:16.645 [2024-05-15 19:46:42.633048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.645 [2024-05-15 19:46:42.633404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.645 [2024-05-15 19:46:42.633412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.645 qpair failed and we were unable to recover it. 00:31:16.645 [2024-05-15 19:46:42.633778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.645 [2024-05-15 19:46:42.634131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.645 [2024-05-15 19:46:42.634138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.645 qpair failed and we were unable to recover it. 00:31:16.645 [2024-05-15 19:46:42.634454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.645 [2024-05-15 19:46:42.634815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.645 [2024-05-15 19:46:42.634822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.645 qpair failed and we were unable to recover it. 00:31:16.645 [2024-05-15 19:46:42.635214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.645 [2024-05-15 19:46:42.635554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.645 [2024-05-15 19:46:42.635562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.645 qpair failed and we were unable to recover it. 00:31:16.645 [2024-05-15 19:46:42.635932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.645 [2024-05-15 19:46:42.636324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.645 [2024-05-15 19:46:42.636332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.645 qpair failed and we were unable to recover it. 00:31:16.645 [2024-05-15 19:46:42.636574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.645 [2024-05-15 19:46:42.636925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.645 [2024-05-15 19:46:42.636933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.645 qpair failed and we were unable to recover it. 
00:31:16.645 [2024-05-15 19:46:42.637164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.645 [2024-05-15 19:46:42.637381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.645 [2024-05-15 19:46:42.637389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420
00:31:16.645 qpair failed and we were unable to recover it.
[... the same sequence — two posix_sock_create connect() failures (errno = 111), one nvme_tcp_qpair_connect_sock error for tqpair=0x7f9490000b90 (addr=10.0.0.2, port=4420), then "qpair failed and we were unable to recover it." — repeats for every subsequent reconnect attempt logged between 19:46:42.637164 and 19:46:42.738748, with only the timestamps changing ...]
00:31:16.650 [2024-05-15 19:46:42.738355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.650 [2024-05-15 19:46:42.738740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.650 [2024-05-15 19:46:42.738748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420
00:31:16.650 qpair failed and we were unable to recover it.
00:31:16.651 [2024-05-15 19:46:42.739161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.739404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.739412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.651 qpair failed and we were unable to recover it. 00:31:16.651 [2024-05-15 19:46:42.739481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.739817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.739826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.651 qpair failed and we were unable to recover it. 00:31:16.651 [2024-05-15 19:46:42.740193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.740439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.740447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.651 qpair failed and we were unable to recover it. 00:31:16.651 [2024-05-15 19:46:42.740634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.740852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.740860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.651 qpair failed and we were unable to recover it. 00:31:16.651 [2024-05-15 19:46:42.741227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.741557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.741565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.651 qpair failed and we were unable to recover it. 00:31:16.651 [2024-05-15 19:46:42.741970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.742201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.742210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.651 qpair failed and we were unable to recover it. 00:31:16.651 [2024-05-15 19:46:42.742653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.743096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.743104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.651 qpair failed and we were unable to recover it. 
00:31:16.651 [2024-05-15 19:46:42.743502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.743857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.743865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.651 qpair failed and we were unable to recover it. 00:31:16.651 [2024-05-15 19:46:42.744237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.744439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.744448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.651 qpair failed and we were unable to recover it. 00:31:16.651 [2024-05-15 19:46:42.744774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.745150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.745158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.651 qpair failed and we were unable to recover it. 00:31:16.651 [2024-05-15 19:46:42.745522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.745876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.745885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.651 qpair failed and we were unable to recover it. 00:31:16.651 [2024-05-15 19:46:42.746124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.746287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.746295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.651 qpair failed and we were unable to recover it. 00:31:16.651 [2024-05-15 19:46:42.746486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.746878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.746886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.651 qpair failed and we were unable to recover it. 00:31:16.651 [2024-05-15 19:46:42.747293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.747503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.747511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.651 qpair failed and we were unable to recover it. 
00:31:16.651 [2024-05-15 19:46:42.747731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.747907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.747917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.651 qpair failed and we were unable to recover it. 00:31:16.651 [2024-05-15 19:46:42.748267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.748671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.748679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.651 qpair failed and we were unable to recover it. 00:31:16.651 [2024-05-15 19:46:42.749050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.749462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.749471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.651 qpair failed and we were unable to recover it. 00:31:16.651 [2024-05-15 19:46:42.749844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.750248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.750256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.651 qpair failed and we were unable to recover it. 00:31:16.651 [2024-05-15 19:46:42.750626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.750932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.750940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.651 qpair failed and we were unable to recover it. 00:31:16.651 [2024-05-15 19:46:42.751325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.751504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.751512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.651 qpair failed and we were unable to recover it. 00:31:16.651 [2024-05-15 19:46:42.751901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.752253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.752260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.651 qpair failed and we were unable to recover it. 
00:31:16.651 [2024-05-15 19:46:42.752463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.752855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.752863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.651 qpair failed and we were unable to recover it. 00:31:16.651 [2024-05-15 19:46:42.753062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.753388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.753397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.651 qpair failed and we were unable to recover it. 00:31:16.651 [2024-05-15 19:46:42.753786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.754189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.754197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.651 qpair failed and we were unable to recover it. 00:31:16.651 [2024-05-15 19:46:42.754538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.754888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.754898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.651 qpair failed and we were unable to recover it. 00:31:16.651 [2024-05-15 19:46:42.755213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.755557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.755565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.651 qpair failed and we were unable to recover it. 00:31:16.651 [2024-05-15 19:46:42.755937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.756293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.756301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.651 qpair failed and we were unable to recover it. 00:31:16.651 [2024-05-15 19:46:42.756690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.757046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.757055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.651 qpair failed and we were unable to recover it. 
00:31:16.651 [2024-05-15 19:46:42.757430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.757835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.757843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.651 qpair failed and we were unable to recover it. 00:31:16.651 [2024-05-15 19:46:42.758045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.651 [2024-05-15 19:46:42.758392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.652 [2024-05-15 19:46:42.758400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.652 qpair failed and we were unable to recover it. 00:31:16.652 [2024-05-15 19:46:42.758757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.652 [2024-05-15 19:46:42.759158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.652 [2024-05-15 19:46:42.759166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.652 qpair failed and we were unable to recover it. 00:31:16.652 [2024-05-15 19:46:42.759560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.652 [2024-05-15 19:46:42.759801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.652 [2024-05-15 19:46:42.759808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.652 qpair failed and we were unable to recover it. 00:31:16.652 [2024-05-15 19:46:42.760180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.652 [2024-05-15 19:46:42.760585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.652 [2024-05-15 19:46:42.760593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.652 qpair failed and we were unable to recover it. 00:31:16.652 [2024-05-15 19:46:42.760965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.652 [2024-05-15 19:46:42.761277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.652 [2024-05-15 19:46:42.761286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.652 qpair failed and we were unable to recover it. 00:31:16.652 [2024-05-15 19:46:42.761682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.652 [2024-05-15 19:46:42.761924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.652 [2024-05-15 19:46:42.761935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.652 qpair failed and we were unable to recover it. 
00:31:16.652 [2024-05-15 19:46:42.762323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.652 [2024-05-15 19:46:42.762703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.652 [2024-05-15 19:46:42.762710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.652 qpair failed and we were unable to recover it. 00:31:16.652 [2024-05-15 19:46:42.763088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.652 [2024-05-15 19:46:42.763338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.652 [2024-05-15 19:46:42.763345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.652 qpair failed and we were unable to recover it. 00:31:16.652 [2024-05-15 19:46:42.763597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.652 [2024-05-15 19:46:42.764011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.652 [2024-05-15 19:46:42.764018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.652 qpair failed and we were unable to recover it. 00:31:16.652 [2024-05-15 19:46:42.764408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.652 [2024-05-15 19:46:42.764764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.652 [2024-05-15 19:46:42.764771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.652 qpair failed and we were unable to recover it. 00:31:16.652 [2024-05-15 19:46:42.765129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.652 [2024-05-15 19:46:42.765535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.652 [2024-05-15 19:46:42.765543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.652 qpair failed and we were unable to recover it. 00:31:16.652 [2024-05-15 19:46:42.765822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.652 [2024-05-15 19:46:42.766225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.652 [2024-05-15 19:46:42.766232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.652 qpair failed and we were unable to recover it. 00:31:16.652 [2024-05-15 19:46:42.766625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.652 [2024-05-15 19:46:42.767030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.652 [2024-05-15 19:46:42.767038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.652 qpair failed and we were unable to recover it. 
00:31:16.652 [2024-05-15 19:46:42.767441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.652 [2024-05-15 19:46:42.767798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.652 [2024-05-15 19:46:42.767806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.652 qpair failed and we were unable to recover it. 00:31:16.652 [2024-05-15 19:46:42.768191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.652 [2024-05-15 19:46:42.768584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.652 [2024-05-15 19:46:42.768591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.652 qpair failed and we were unable to recover it. 00:31:16.652 [2024-05-15 19:46:42.769009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.652 [2024-05-15 19:46:42.769179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.652 [2024-05-15 19:46:42.769188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.652 qpair failed and we were unable to recover it. 00:31:16.652 [2024-05-15 19:46:42.769543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.652 [2024-05-15 19:46:42.769906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.652 [2024-05-15 19:46:42.769915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.652 qpair failed and we were unable to recover it. 00:31:16.652 [2024-05-15 19:46:42.770247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.652 [2024-05-15 19:46:42.770474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.652 [2024-05-15 19:46:42.770483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.652 qpair failed and we were unable to recover it. 00:31:16.652 [2024-05-15 19:46:42.770856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.652 [2024-05-15 19:46:42.771209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.652 [2024-05-15 19:46:42.771217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.652 qpair failed and we were unable to recover it. 00:31:16.652 [2024-05-15 19:46:42.771604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.652 [2024-05-15 19:46:42.771898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.652 [2024-05-15 19:46:42.771905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.652 qpair failed and we were unable to recover it. 
00:31:16.652 [2024-05-15 19:46:42.772273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.652 [2024-05-15 19:46:42.772496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.652 [2024-05-15 19:46:42.772504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.652 qpair failed and we were unable to recover it. 00:31:16.652 [2024-05-15 19:46:42.772852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.652 [2024-05-15 19:46:42.773260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.773269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.653 qpair failed and we were unable to recover it. 00:31:16.653 [2024-05-15 19:46:42.773659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.774016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.774024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.653 qpair failed and we were unable to recover it. 00:31:16.653 [2024-05-15 19:46:42.774393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.774676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.774683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.653 qpair failed and we were unable to recover it. 00:31:16.653 [2024-05-15 19:46:42.775053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.775411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.775419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.653 qpair failed and we were unable to recover it. 00:31:16.653 [2024-05-15 19:46:42.775833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.776136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.776143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.653 qpair failed and we were unable to recover it. 00:31:16.653 [2024-05-15 19:46:42.776538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.776833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.776841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.653 qpair failed and we were unable to recover it. 
00:31:16.653 [2024-05-15 19:46:42.777073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.777317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.777325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.653 qpair failed and we were unable to recover it. 00:31:16.653 [2024-05-15 19:46:42.777381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.777677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.777684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.653 qpair failed and we were unable to recover it. 00:31:16.653 [2024-05-15 19:46:42.778041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.778450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.778458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.653 qpair failed and we were unable to recover it. 00:31:16.653 [2024-05-15 19:46:42.778828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.779232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.779240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.653 qpair failed and we were unable to recover it. 00:31:16.653 [2024-05-15 19:46:42.779654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.779872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.779879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.653 qpair failed and we were unable to recover it. 00:31:16.653 [2024-05-15 19:46:42.780101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.780275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.780282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.653 qpair failed and we were unable to recover it. 00:31:16.653 [2024-05-15 19:46:42.780659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.780951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.780959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.653 qpair failed and we were unable to recover it. 
00:31:16.653 [2024-05-15 19:46:42.781157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.781522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.781530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.653 qpair failed and we were unable to recover it. 00:31:16.653 [2024-05-15 19:46:42.781895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.782069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.782077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.653 qpair failed and we were unable to recover it. 00:31:16.653 [2024-05-15 19:46:42.782285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.782579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.782587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.653 qpair failed and we were unable to recover it. 00:31:16.653 [2024-05-15 19:46:42.782992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.783405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.783411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.653 qpair failed and we were unable to recover it. 00:31:16.653 [2024-05-15 19:46:42.783812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.784220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.784226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.653 qpair failed and we were unable to recover it. 00:31:16.653 [2024-05-15 19:46:42.784428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.784695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.784701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.653 qpair failed and we were unable to recover it. 00:31:16.653 [2024-05-15 19:46:42.784980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.785394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.785401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.653 qpair failed and we were unable to recover it. 
00:31:16.653 [2024-05-15 19:46:42.785619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.785947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.785953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.653 qpair failed and we were unable to recover it. 00:31:16.653 [2024-05-15 19:46:42.786329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.786661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.786669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.653 qpair failed and we were unable to recover it. 00:31:16.653 [2024-05-15 19:46:42.786908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.787315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.787324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.653 qpair failed and we were unable to recover it. 00:31:16.653 [2024-05-15 19:46:42.787583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.787755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.787764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.653 qpair failed and we were unable to recover it. 00:31:16.653 [2024-05-15 19:46:42.788100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.788411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.788419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.653 qpair failed and we were unable to recover it. 00:31:16.653 [2024-05-15 19:46:42.788777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.789045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.789054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.653 qpair failed and we were unable to recover it. 00:31:16.653 [2024-05-15 19:46:42.789287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.789530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.789539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.653 qpair failed and we were unable to recover it. 
00:31:16.653 [2024-05-15 19:46:42.789769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 19:46:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:16.653 [2024-05-15 19:46:42.790172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.790182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.653 qpair failed and we were unable to recover it. 00:31:16.653 19:46:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:31:16.653 [2024-05-15 19:46:42.790546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 19:46:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:16.653 19:46:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:16.653 [2024-05-15 19:46:42.790952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.653 [2024-05-15 19:46:42.790961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.654 qpair failed and we were unable to recover it. 00:31:16.654 19:46:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:16.654 [2024-05-15 19:46:42.791217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.791413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.791422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.654 qpair failed and we were unable to recover it. 00:31:16.654 [2024-05-15 19:46:42.791741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.792144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.792153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.654 qpair failed and we were unable to recover it. 00:31:16.654 [2024-05-15 19:46:42.792522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.792843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.792852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.654 qpair failed and we were unable to recover it. 00:31:16.654 [2024-05-15 19:46:42.793224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.793586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.793595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.654 qpair failed and we were unable to recover it. 
00:31:16.654 [2024-05-15 19:46:42.793954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.794310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.794324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.654 qpair failed and we were unable to recover it. 00:31:16.654 [2024-05-15 19:46:42.794538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.794915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.794925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.654 qpair failed and we were unable to recover it. 00:31:16.654 [2024-05-15 19:46:42.795162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.795529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.795538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.654 qpair failed and we were unable to recover it. 00:31:16.654 [2024-05-15 19:46:42.795740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.795899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.795908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.654 qpair failed and we were unable to recover it. 00:31:16.654 [2024-05-15 19:46:42.796251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.796608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.796617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.654 qpair failed and we were unable to recover it. 00:31:16.654 [2024-05-15 19:46:42.796816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.797132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.797140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.654 qpair failed and we were unable to recover it. 00:31:16.654 [2024-05-15 19:46:42.797511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.797720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.797729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.654 qpair failed and we were unable to recover it. 
00:31:16.654 [2024-05-15 19:46:42.798110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.798444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.798453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.654 qpair failed and we were unable to recover it. 00:31:16.654 [2024-05-15 19:46:42.798820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.799233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.799241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.654 qpair failed and we were unable to recover it. 00:31:16.654 [2024-05-15 19:46:42.799617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.800018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.800026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.654 qpair failed and we were unable to recover it. 00:31:16.654 [2024-05-15 19:46:42.800399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.800784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.800794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.654 qpair failed and we were unable to recover it. 00:31:16.654 [2024-05-15 19:46:42.801153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.801363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.801371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.654 qpair failed and we were unable to recover it. 00:31:16.654 [2024-05-15 19:46:42.801610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.801936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.801945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.654 qpair failed and we were unable to recover it. 00:31:16.654 [2024-05-15 19:46:42.802312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.802700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.802709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.654 qpair failed and we were unable to recover it. 
00:31:16.654 [2024-05-15 19:46:42.803079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.803435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.803445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.654 qpair failed and we were unable to recover it. 00:31:16.654 [2024-05-15 19:46:42.803774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.804130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.804139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.654 qpair failed and we were unable to recover it. 00:31:16.654 [2024-05-15 19:46:42.804507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.804710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.804719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.654 qpair failed and we were unable to recover it. 00:31:16.654 [2024-05-15 19:46:42.805089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.805483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.805491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.654 qpair failed and we were unable to recover it. 00:31:16.654 [2024-05-15 19:46:42.805922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.806145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.806154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.654 qpair failed and we were unable to recover it. 00:31:16.654 [2024-05-15 19:46:42.806533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.806886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.806895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.654 qpair failed and we were unable to recover it. 00:31:16.654 [2024-05-15 19:46:42.807293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.807658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.807669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.654 qpair failed and we were unable to recover it. 
00:31:16.654 [2024-05-15 19:46:42.808037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.808395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.808404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.654 qpair failed and we were unable to recover it. 00:31:16.654 [2024-05-15 19:46:42.808634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.808982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.808990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.654 qpair failed and we were unable to recover it. 00:31:16.654 [2024-05-15 19:46:42.809222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.809587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.809596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.654 qpair failed and we were unable to recover it. 00:31:16.654 [2024-05-15 19:46:42.809991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.810345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.654 [2024-05-15 19:46:42.810353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.654 qpair failed and we were unable to recover it. 00:31:16.655 [2024-05-15 19:46:42.810775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.655 [2024-05-15 19:46:42.811174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.655 [2024-05-15 19:46:42.811182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.655 qpair failed and we were unable to recover it. 00:31:16.655 [2024-05-15 19:46:42.811578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.655 [2024-05-15 19:46:42.811983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.655 [2024-05-15 19:46:42.811991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.655 qpair failed and we were unable to recover it. 00:31:16.655 [2024-05-15 19:46:42.812306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.655 [2024-05-15 19:46:42.812679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.655 [2024-05-15 19:46:42.812687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.655 qpair failed and we were unable to recover it. 
00:31:16.655 [2024-05-15 19:46:42.812901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.655 [2024-05-15 19:46:42.813283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.655 [2024-05-15 19:46:42.813292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.655 qpair failed and we were unable to recover it. 00:31:16.655 [2024-05-15 19:46:42.813738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.655 [2024-05-15 19:46:42.813977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.655 [2024-05-15 19:46:42.813985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.655 qpair failed and we were unable to recover it. 00:31:16.655 [2024-05-15 19:46:42.814197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.655 [2024-05-15 19:46:42.814567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.655 [2024-05-15 19:46:42.814577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.655 qpair failed and we were unable to recover it. 00:31:16.655 [2024-05-15 19:46:42.814946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.655 [2024-05-15 19:46:42.815159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.655 [2024-05-15 19:46:42.815165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.655 qpair failed and we were unable to recover it. 00:31:16.920 [2024-05-15 19:46:42.815552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.920 [2024-05-15 19:46:42.815948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.920 [2024-05-15 19:46:42.815957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.920 qpair failed and we were unable to recover it. 00:31:16.920 [2024-05-15 19:46:42.816322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.920 [2024-05-15 19:46:42.816703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.920 [2024-05-15 19:46:42.816711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.920 qpair failed and we were unable to recover it. 00:31:16.920 [2024-05-15 19:46:42.816942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.920 [2024-05-15 19:46:42.817348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.920 [2024-05-15 19:46:42.817358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.920 qpair failed and we were unable to recover it. 
00:31:16.920 [2024-05-15 19:46:42.817763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.920 [2024-05-15 19:46:42.817919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.920 [2024-05-15 19:46:42.817927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.920 qpair failed and we were unable to recover it. 00:31:16.920 [2024-05-15 19:46:42.818320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.920 [2024-05-15 19:46:42.818729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.920 [2024-05-15 19:46:42.818737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.920 qpair failed and we were unable to recover it. 00:31:16.920 [2024-05-15 19:46:42.818790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.920 [2024-05-15 19:46:42.819107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.920 [2024-05-15 19:46:42.819115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.920 qpair failed and we were unable to recover it. 00:31:16.920 [2024-05-15 19:46:42.819486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.920 [2024-05-15 19:46:42.819737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.920 [2024-05-15 19:46:42.819745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.920 qpair failed and we were unable to recover it. 00:31:16.920 [2024-05-15 19:46:42.820158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.920 [2024-05-15 19:46:42.820553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.920 [2024-05-15 19:46:42.820561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.920 qpair failed and we were unable to recover it. 00:31:16.920 [2024-05-15 19:46:42.820951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.920 [2024-05-15 19:46:42.821363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.920 [2024-05-15 19:46:42.821370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.920 qpair failed and we were unable to recover it. 00:31:16.920 [2024-05-15 19:46:42.821606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.920 [2024-05-15 19:46:42.821785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.920 [2024-05-15 19:46:42.821793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.920 qpair failed and we were unable to recover it. 
00:31:16.920 [2024-05-15 19:46:42.822150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.920 [2024-05-15 19:46:42.822545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.920 [2024-05-15 19:46:42.822554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.920 qpair failed and we were unable to recover it. 00:31:16.920 [2024-05-15 19:46:42.822787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.920 [2024-05-15 19:46:42.823157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.920 [2024-05-15 19:46:42.823166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.920 qpair failed and we were unable to recover it. 00:31:16.920 [2024-05-15 19:46:42.823367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.920 [2024-05-15 19:46:42.823728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.920 [2024-05-15 19:46:42.823736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.920 qpair failed and we were unable to recover it. 00:31:16.920 [2024-05-15 19:46:42.824109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.920 [2024-05-15 19:46:42.824512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.920 [2024-05-15 19:46:42.824520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.920 qpair failed and we were unable to recover it. 00:31:16.920 [2024-05-15 19:46:42.824721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.920 [2024-05-15 19:46:42.825071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.920 [2024-05-15 19:46:42.825079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.920 qpair failed and we were unable to recover it. 00:31:16.920 [2024-05-15 19:46:42.825447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.920 [2024-05-15 19:46:42.825846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.920 [2024-05-15 19:46:42.825854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.920 qpair failed and we were unable to recover it. 00:31:16.920 [2024-05-15 19:46:42.826063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.920 [2024-05-15 19:46:42.826414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.920 [2024-05-15 19:46:42.826422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.920 qpair failed and we were unable to recover it. 
00:31:16.920 [2024-05-15 19:46:42.826743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.920 [2024-05-15 19:46:42.827148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.920 [2024-05-15 19:46:42.827156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.920 qpair failed and we were unable to recover it. 00:31:16.920 [2024-05-15 19:46:42.827523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.920 [2024-05-15 19:46:42.827688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.920 [2024-05-15 19:46:42.827696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.920 qpair failed and we were unable to recover it. 00:31:16.921 [2024-05-15 19:46:42.828044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.828444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.828451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.921 qpair failed and we were unable to recover it. 00:31:16.921 [2024-05-15 19:46:42.828845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.829251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.829259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.921 qpair failed and we were unable to recover it. 00:31:16.921 [2024-05-15 19:46:42.829620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.829833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.829842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.921 qpair failed and we were unable to recover it. 00:31:16.921 19:46:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:16.921 [2024-05-15 19:46:42.830230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 19:46:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:16.921 [2024-05-15 19:46:42.830545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.830554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.921 qpair failed and we were unable to recover it. 
00:31:16.921 19:46:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.921 19:46:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:16.921 [2024-05-15 19:46:42.830954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.831114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.831121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.921 qpair failed and we were unable to recover it. 00:31:16.921 [2024-05-15 19:46:42.831255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.831562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.831571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.921 qpair failed and we were unable to recover it. 00:31:16.921 [2024-05-15 19:46:42.831945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.832306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.832316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.921 qpair failed and we were unable to recover it. 00:31:16.921 [2024-05-15 19:46:42.832509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.832852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.832860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.921 qpair failed and we were unable to recover it. 00:31:16.921 [2024-05-15 19:46:42.833228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.833557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.833567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.921 qpair failed and we were unable to recover it. 00:31:16.921 [2024-05-15 19:46:42.833960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.834361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.834369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.921 qpair failed and we were unable to recover it. 00:31:16.921 [2024-05-15 19:46:42.834604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.834846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.834853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.921 qpair failed and we were unable to recover it. 
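Buried in the retry noise above, the test script has begun provisioning the target side: rpc_cmd bdev_malloc_create 64 512 -b Malloc0 asks the running nvmf_tgt to create a RAM-backed malloc bdev named Malloc0 (64 MB total, 512-byte blocks) that will later be exported as a namespace. rpc_cmd is the autotest shell wrapper around SPDK's scripts/rpc.py, so a rough direct equivalent is sketched below; the repository-relative path and the default RPC socket are assumptions, not taken from this log:

  # Create a 64 MB malloc bdev with 512-byte blocks, named Malloc0.
  # /var/tmp/spdk.sock is rpc.py's default RPC socket; pass -s to point at a
  # target started with a different socket path.
  ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0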
00:31:16.921 [2024-05-15 19:46:42.835223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.835613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.835622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.921 qpair failed and we were unable to recover it. 00:31:16.921 [2024-05-15 19:46:42.835990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.836348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.836356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.921 qpair failed and we were unable to recover it. 00:31:16.921 [2024-05-15 19:46:42.836721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.837018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.837026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.921 qpair failed and we were unable to recover it. 00:31:16.921 [2024-05-15 19:46:42.837397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.837654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.837661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.921 qpair failed and we were unable to recover it. 00:31:16.921 [2024-05-15 19:46:42.838038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.838290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.838300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.921 qpair failed and we were unable to recover it. 00:31:16.921 [2024-05-15 19:46:42.838667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.838913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.838920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.921 qpair failed and we were unable to recover it. 00:31:16.921 [2024-05-15 19:46:42.839310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.839534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.839542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.921 qpair failed and we were unable to recover it. 
00:31:16.921 [2024-05-15 19:46:42.839857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.840215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.840224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.921 qpair failed and we were unable to recover it. 00:31:16.921 [2024-05-15 19:46:42.840584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.840804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.840812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.921 qpair failed and we were unable to recover it. 00:31:16.921 [2024-05-15 19:46:42.841188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.841436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.841444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.921 qpair failed and we were unable to recover it. 00:31:16.921 [2024-05-15 19:46:42.841692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.842053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.842061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.921 qpair failed and we were unable to recover it. 00:31:16.921 [2024-05-15 19:46:42.842433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.842638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.842646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.921 qpair failed and we were unable to recover it. 00:31:16.921 [2024-05-15 19:46:42.842998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.843401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.843409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.921 qpair failed and we were unable to recover it. 00:31:16.921 [2024-05-15 19:46:42.843789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.844191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.844198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.921 qpair failed and we were unable to recover it. 
00:31:16.921 [2024-05-15 19:46:42.844432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.844830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.844838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.921 qpair failed and we were unable to recover it. 00:31:16.921 [2024-05-15 19:46:42.845074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.845477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.845486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.921 qpair failed and we were unable to recover it. 00:31:16.921 [2024-05-15 19:46:42.845864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.846266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.921 [2024-05-15 19:46:42.846274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.921 qpair failed and we were unable to recover it. 00:31:16.921 [2024-05-15 19:46:42.846474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.846800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.846810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.922 qpair failed and we were unable to recover it. 00:31:16.922 [2024-05-15 19:46:42.847071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.847434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.847442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.922 qpair failed and we were unable to recover it. 00:31:16.922 [2024-05-15 19:46:42.847623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.847983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.847990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.922 qpair failed and we were unable to recover it. 00:31:16.922 [2024-05-15 19:46:42.848352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.848613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.848621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.922 qpair failed and we were unable to recover it. 
00:31:16.922 Malloc0 00:31:16.922 [2024-05-15 19:46:42.848985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.849343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.849351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.922 qpair failed and we were unable to recover it. 00:31:16.922 19:46:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.922 [2024-05-15 19:46:42.849746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 19:46:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:16.922 [2024-05-15 19:46:42.849959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.849967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.922 qpair failed and we were unable to recover it. 00:31:16.922 19:46:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.922 [2024-05-15 19:46:42.850344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 19:46:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:16.922 [2024-05-15 19:46:42.850683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.850690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.922 qpair failed and we were unable to recover it. 00:31:16.922 [2024-05-15 19:46:42.851067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.851472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.851480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.922 qpair failed and we were unable to recover it. 00:31:16.922 [2024-05-15 19:46:42.851545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.851886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.851895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.922 qpair failed and we were unable to recover it. 00:31:16.922 [2024-05-15 19:46:42.851948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.852146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.852153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.922 qpair failed and we were unable to recover it. 
00:31:16.922 [2024-05-15 19:46:42.852390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.852800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.852808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.922 qpair failed and we were unable to recover it. 00:31:16.922 [2024-05-15 19:46:42.853188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.853590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.853598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.922 qpair failed and we were unable to recover it. 00:31:16.922 [2024-05-15 19:46:42.853926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.854281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.854289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.922 qpair failed and we were unable to recover it. 00:31:16.922 [2024-05-15 19:46:42.854669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.855074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.855081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.922 qpair failed and we were unable to recover it. 00:31:16.922 [2024-05-15 19:46:42.855325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.855704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.855712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.922 qpair failed and we were unable to recover it. 00:31:16.922 [2024-05-15 19:46:42.855950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.856155] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:16.922 [2024-05-15 19:46:42.856306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.856326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.922 qpair failed and we were unable to recover it. 00:31:16.922 [2024-05-15 19:46:42.856529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.856919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.856927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.922 qpair failed and we were unable to recover it. 
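The *** TCP Transport Init *** notice in the block above is the target acknowledging the rpc_cmd nvmf_create_transport -t tcp -o call traced a few blocks earlier: the NVMe-oF TCP transport is now registered inside nvmf_tgt. A hedged rpc.py sketch of the same step follows; the bare -o flag is reproduced verbatim from the trace rather than interpreted, and the script path is an assumption:

  # Register the TCP transport with the running nvmf_tgt (flags mirror the
  # test script's invocation as traced in this log).
  ./scripts/rpc.py nvmf_create_transport -t tcp -o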
00:31:16.922 [2024-05-15 19:46:42.857322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.857700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.857707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.922 qpair failed and we were unable to recover it. 00:31:16.922 [2024-05-15 19:46:42.858080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.858467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.858475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.922 qpair failed and we were unable to recover it. 00:31:16.922 [2024-05-15 19:46:42.858845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.859207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.859215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.922 qpair failed and we were unable to recover it. 00:31:16.922 [2024-05-15 19:46:42.859447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.859830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.859838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.922 qpair failed and we were unable to recover it. 00:31:16.922 [2024-05-15 19:46:42.860231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.860576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.860584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.922 qpair failed and we were unable to recover it. 00:31:16.922 [2024-05-15 19:46:42.860818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.861222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.861230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.922 qpair failed and we were unable to recover it. 00:31:16.922 [2024-05-15 19:46:42.861439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.861726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.861734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.922 qpair failed and we were unable to recover it. 
00:31:16.922 [2024-05-15 19:46:42.861902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.862223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.862231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.922 qpair failed and we were unable to recover it. 00:31:16.922 [2024-05-15 19:46:42.862590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.862873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.862880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.922 qpair failed and we were unable to recover it. 00:31:16.922 [2024-05-15 19:46:42.863106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.863480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.863488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.922 qpair failed and we were unable to recover it. 00:31:16.922 [2024-05-15 19:46:42.863599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.863951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.922 [2024-05-15 19:46:42.863959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.922 qpair failed and we were unable to recover it. 00:31:16.923 [2024-05-15 19:46:42.864336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.864693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.864701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.923 qpair failed and we were unable to recover it. 00:31:16.923 [2024-05-15 19:46:42.864937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 19:46:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.923 [2024-05-15 19:46:42.865340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.865349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.923 qpair failed and we were unable to recover it. 
00:31:16.923 19:46:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:16.923 [2024-05-15 19:46:42.865722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 19:46:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.923 19:46:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:16.923 [2024-05-15 19:46:42.866069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.866076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.923 qpair failed and we were unable to recover it. 00:31:16.923 [2024-05-15 19:46:42.866448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.866672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.866680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.923 qpair failed and we were unable to recover it. 00:31:16.923 [2024-05-15 19:46:42.867051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.867456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.867463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.923 qpair failed and we were unable to recover it. 00:31:16.923 [2024-05-15 19:46:42.867689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.868092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.868100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.923 qpair failed and we were unable to recover it. 00:31:16.923 [2024-05-15 19:46:42.868472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.868855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.868862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.923 qpair failed and we were unable to recover it. 00:31:16.923 [2024-05-15 19:46:42.869067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.869395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.869403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.923 qpair failed and we were unable to recover it. 
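rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001, traced at the top of the block above, creates the subsystem the initiator will reconnect to: -a allows any host NQN to connect and -s sets the serial number. Direct rpc.py sketch, under the same path assumption as before:

  # Create subsystem cnode1, allow any host (-a), serial SPDK00000000000001.
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001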
00:31:16.923 [2024-05-15 19:46:42.869786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.870143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.870150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.923 qpair failed and we were unable to recover it. 00:31:16.923 [2024-05-15 19:46:42.870384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.870727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.870735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.923 qpair failed and we were unable to recover it. 00:31:16.923 [2024-05-15 19:46:42.871105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.871479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.871486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.923 qpair failed and we were unable to recover it. 00:31:16.923 [2024-05-15 19:46:42.871731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.872095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.872103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.923 qpair failed and we were unable to recover it. 00:31:16.923 [2024-05-15 19:46:42.872381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.872763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.872771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.923 qpair failed and we were unable to recover it. 00:31:16.923 [2024-05-15 19:46:42.873166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.873408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.873416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.923 qpair failed and we were unable to recover it. 00:31:16.923 [2024-05-15 19:46:42.873782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.874186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.874194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.923 qpair failed and we were unable to recover it. 
00:31:16.923 [2024-05-15 19:46:42.874528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.874913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.874921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.923 qpair failed and we were unable to recover it. 00:31:16.923 [2024-05-15 19:46:42.875192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.875628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.875636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.923 qpair failed and we were unable to recover it. 00:31:16.923 [2024-05-15 19:46:42.875992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.876522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.876552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.923 qpair failed and we were unable to recover it. 00:31:16.923 [2024-05-15 19:46:42.876992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 19:46:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.923 [2024-05-15 19:46:42.877298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.877306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.923 qpair failed and we were unable to recover it. 00:31:16.923 19:46:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:16.923 [2024-05-15 19:46:42.877675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 19:46:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.923 [2024-05-15 19:46:42.878084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.878093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.923 qpair failed and we were unable to recover it. 00:31:16.923 19:46:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:16.923 [2024-05-15 19:46:42.878319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.878845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.878874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.923 qpair failed and we were unable to recover it. 
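rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0, traced in the block above, attaches the malloc bdev created earlier as a namespace of cnode1; with no explicit namespace ID given, the target assigns the next free one (NSID 1 for the first namespace, as far as the defaults go). Sketch of the direct call:

  # Expose Malloc0 as a namespace of cnode1; omitting --nsid lets the target
  # pick the next free namespace ID.
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0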
00:31:16.923 [2024-05-15 19:46:42.879284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.879780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.879808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.923 qpair failed and we were unable to recover it. 00:31:16.923 [2024-05-15 19:46:42.880188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.880523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.880560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.923 qpair failed and we were unable to recover it. 00:31:16.923 [2024-05-15 19:46:42.880939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.881305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.881318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.923 qpair failed and we were unable to recover it. 00:31:16.923 [2024-05-15 19:46:42.881702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.882105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.882114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.923 qpair failed and we were unable to recover it. 00:31:16.923 [2024-05-15 19:46:42.882594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.883028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.883039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.923 qpair failed and we were unable to recover it. 00:31:16.923 [2024-05-15 19:46:42.883520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.883767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.923 [2024-05-15 19:46:42.883777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.924 qpair failed and we were unable to recover it. 00:31:16.924 [2024-05-15 19:46:42.883977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.924 [2024-05-15 19:46:42.884355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.924 [2024-05-15 19:46:42.884364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.924 qpair failed and we were unable to recover it. 
00:31:16.924 [2024-05-15 19:46:42.884789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.924 [2024-05-15 19:46:42.884965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.924 [2024-05-15 19:46:42.884974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.924 qpair failed and we were unable to recover it. 00:31:16.924 [2024-05-15 19:46:42.885344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.924 [2024-05-15 19:46:42.885717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.924 [2024-05-15 19:46:42.885726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.924 qpair failed and we were unable to recover it. 00:31:16.924 [2024-05-15 19:46:42.886004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.924 [2024-05-15 19:46:42.886361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.924 [2024-05-15 19:46:42.886370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.924 qpair failed and we were unable to recover it. 00:31:16.924 [2024-05-15 19:46:42.886774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.924 [2024-05-15 19:46:42.886946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.924 [2024-05-15 19:46:42.886955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.924 qpair failed and we were unable to recover it. 00:31:16.924 [2024-05-15 19:46:42.887296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.924 [2024-05-15 19:46:42.887552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.924 [2024-05-15 19:46:42.887562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.924 qpair failed and we were unable to recover it. 00:31:16.924 [2024-05-15 19:46:42.887948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.924 [2024-05-15 19:46:42.888351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.924 [2024-05-15 19:46:42.888360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.924 qpair failed and we were unable to recover it. 00:31:16.924 [2024-05-15 19:46:42.888734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.924 [2024-05-15 19:46:42.888911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.924 [2024-05-15 19:46:42.888919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.924 qpair failed and we were unable to recover it. 
00:31:16.924 [2024-05-15 19:46:42.888997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.924 19:46:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.924 [2024-05-15 19:46:42.889285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.924 [2024-05-15 19:46:42.889294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.924 qpair failed and we were unable to recover it. 00:31:16.924 [2024-05-15 19:46:42.889543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.924 19:46:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:16.924 [2024-05-15 19:46:42.889843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.924 [2024-05-15 19:46:42.889852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.924 qpair failed and we were unable to recover it. 00:31:16.924 19:46:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.924 19:46:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:16.924 [2024-05-15 19:46:42.890250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.924 [2024-05-15 19:46:42.890597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.924 [2024-05-15 19:46:42.890609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.924 qpair failed and we were unable to recover it. 00:31:16.924 [2024-05-15 19:46:42.890824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.924 [2024-05-15 19:46:42.891152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.924 [2024-05-15 19:46:42.891162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.924 qpair failed and we were unable to recover it. 00:31:16.924 [2024-05-15 19:46:42.891542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.924 [2024-05-15 19:46:42.891952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.924 [2024-05-15 19:46:42.891961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.924 qpair failed and we were unable to recover it. 00:31:16.924 [2024-05-15 19:46:42.892166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.924 [2024-05-15 19:46:42.892537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.924 [2024-05-15 19:46:42.892547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.924 qpair failed and we were unable to recover it. 
00:31:16.924 [2024-05-15 19:46:42.892764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.924 [2024-05-15 19:46:42.893012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.924 [2024-05-15 19:46:42.893021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.924 qpair failed and we were unable to recover it. 00:31:16.924 [2024-05-15 19:46:42.893391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.924 [2024-05-15 19:46:42.893791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.924 [2024-05-15 19:46:42.893799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.924 qpair failed and we were unable to recover it. 00:31:16.924 [2024-05-15 19:46:42.894191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.924 [2024-05-15 19:46:42.894555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.924 [2024-05-15 19:46:42.894564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.924 qpair failed and we were unable to recover it. 00:31:16.924 [2024-05-15 19:46:42.894774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.924 [2024-05-15 19:46:42.895100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.924 [2024-05-15 19:46:42.895109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.924 qpair failed and we were unable to recover it. 00:31:16.924 [2024-05-15 19:46:42.895495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.924 [2024-05-15 19:46:42.895714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.924 [2024-05-15 19:46:42.895723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.924 qpair failed and we were unable to recover it. 00:31:16.924 [2024-05-15 19:46:42.895917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.924 [2024-05-15 19:46:42.896216] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:31:16.924 [2024-05-15 19:46:42.896270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.924 [2024-05-15 19:46:42.896278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9490000b90 with addr=10.0.0.2, port=4420 00:31:16.924 qpair failed and we were unable to recover it. 
00:31:16.924 [2024-05-15 19:46:42.896455] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:16.924 19:46:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.924 19:46:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:16.924 19:46:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.924 19:46:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:16.924 [2024-05-15 19:46:42.907002] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.925 [2024-05-15 19:46:42.907120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.925 [2024-05-15 19:46:42.907136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.925 [2024-05-15 19:46:42.907142] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.925 [2024-05-15 19:46:42.907147] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:16.925 [2024-05-15 19:46:42.907163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.925 qpair failed and we were unable to recover it. 00:31:16.925 19:46:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.925 19:46:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3803363 00:31:16.925 [2024-05-15 19:46:42.916999] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.925 [2024-05-15 19:46:42.917072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.925 [2024-05-15 19:46:42.917086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.925 [2024-05-15 19:46:42.917091] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.925 [2024-05-15 19:46:42.917095] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:16.925 [2024-05-15 19:46:42.917107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.925 qpair failed and we were unable to recover it. 
00:31:16.925 [2024-05-15 19:46:42.926999] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.925 [2024-05-15 19:46:42.927067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.925 [2024-05-15 19:46:42.927080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.925 [2024-05-15 19:46:42.927086] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.925 [2024-05-15 19:46:42.927090] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:16.925 [2024-05-15 19:46:42.927101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.925 qpair failed and we were unable to recover it. 00:31:16.925 [2024-05-15 19:46:42.936964] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.925 [2024-05-15 19:46:42.937046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.925 [2024-05-15 19:46:42.937059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.925 [2024-05-15 19:46:42.937067] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.925 [2024-05-15 19:46:42.937073] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:16.925 [2024-05-15 19:46:42.937084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.925 qpair failed and we were unable to recover it. 00:31:16.925 [2024-05-15 19:46:42.946896] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.925 [2024-05-15 19:46:42.946962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.925 [2024-05-15 19:46:42.946975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.925 [2024-05-15 19:46:42.946981] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.925 [2024-05-15 19:46:42.946985] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:16.925 [2024-05-15 19:46:42.946996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.925 qpair failed and we were unable to recover it. 
00:31:16.925 [2024-05-15 19:46:42.957019] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.925 [2024-05-15 19:46:42.957077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.925 [2024-05-15 19:46:42.957090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.925 [2024-05-15 19:46:42.957095] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.925 [2024-05-15 19:46:42.957100] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:16.925 [2024-05-15 19:46:42.957111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.925 qpair failed and we were unable to recover it. 00:31:16.925 [2024-05-15 19:46:42.967078] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.925 [2024-05-15 19:46:42.967167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.925 [2024-05-15 19:46:42.967186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.925 [2024-05-15 19:46:42.967193] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.925 [2024-05-15 19:46:42.967198] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:16.925 [2024-05-15 19:46:42.967212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.925 qpair failed and we were unable to recover it. 00:31:16.925 [2024-05-15 19:46:42.977070] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.925 [2024-05-15 19:46:42.977133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.925 [2024-05-15 19:46:42.977147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.925 [2024-05-15 19:46:42.977153] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.925 [2024-05-15 19:46:42.977157] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:16.925 [2024-05-15 19:46:42.977169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.925 qpair failed and we were unable to recover it. 
00:31:16.925 [2024-05-15 19:46:42.987088] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.925 [2024-05-15 19:46:42.987187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.925 [2024-05-15 19:46:42.987200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.925 [2024-05-15 19:46:42.987205] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.925 [2024-05-15 19:46:42.987210] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:16.925 [2024-05-15 19:46:42.987221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.925 qpair failed and we were unable to recover it. 00:31:16.925 [2024-05-15 19:46:42.997145] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.925 [2024-05-15 19:46:42.997207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.925 [2024-05-15 19:46:42.997219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.925 [2024-05-15 19:46:42.997224] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.925 [2024-05-15 19:46:42.997229] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:16.925 [2024-05-15 19:46:42.997240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.925 qpair failed and we were unable to recover it. 00:31:16.925 [2024-05-15 19:46:43.007142] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.925 [2024-05-15 19:46:43.007230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.925 [2024-05-15 19:46:43.007243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.925 [2024-05-15 19:46:43.007248] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.925 [2024-05-15 19:46:43.007253] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:16.925 [2024-05-15 19:46:43.007263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.925 qpair failed and we were unable to recover it. 
00:31:16.925 [2024-05-15 19:46:43.017157] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.925 [2024-05-15 19:46:43.017218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.925 [2024-05-15 19:46:43.017230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.925 [2024-05-15 19:46:43.017235] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.925 [2024-05-15 19:46:43.017240] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:16.925 [2024-05-15 19:46:43.017250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.925 qpair failed and we were unable to recover it. 00:31:16.925 [2024-05-15 19:46:43.027196] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.925 [2024-05-15 19:46:43.027268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.925 [2024-05-15 19:46:43.027281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.925 [2024-05-15 19:46:43.027289] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.925 [2024-05-15 19:46:43.027294] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:16.925 [2024-05-15 19:46:43.027305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.925 qpair failed and we were unable to recover it. 00:31:16.925 [2024-05-15 19:46:43.037243] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.925 [2024-05-15 19:46:43.037301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.925 [2024-05-15 19:46:43.037317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.925 [2024-05-15 19:46:43.037323] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.925 [2024-05-15 19:46:43.037328] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:16.926 [2024-05-15 19:46:43.037339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.926 qpair failed and we were unable to recover it. 
00:31:16.926 [2024-05-15 19:46:43.047253] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.926 [2024-05-15 19:46:43.047317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.926 [2024-05-15 19:46:43.047330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.926 [2024-05-15 19:46:43.047335] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.926 [2024-05-15 19:46:43.047340] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:16.926 [2024-05-15 19:46:43.047351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.926 qpair failed and we were unable to recover it. 00:31:16.926 [2024-05-15 19:46:43.057319] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.926 [2024-05-15 19:46:43.057425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.926 [2024-05-15 19:46:43.057438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.926 [2024-05-15 19:46:43.057443] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.926 [2024-05-15 19:46:43.057448] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:16.926 [2024-05-15 19:46:43.057459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.926 qpair failed and we were unable to recover it. 00:31:16.926 [2024-05-15 19:46:43.067301] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.926 [2024-05-15 19:46:43.067371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.926 [2024-05-15 19:46:43.067384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.926 [2024-05-15 19:46:43.067389] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.926 [2024-05-15 19:46:43.067394] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:16.926 [2024-05-15 19:46:43.067405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.926 qpair failed and we were unable to recover it. 
00:31:16.926 [2024-05-15 19:46:43.077364] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.926 [2024-05-15 19:46:43.077457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.926 [2024-05-15 19:46:43.077470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.926 [2024-05-15 19:46:43.077475] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.926 [2024-05-15 19:46:43.077479] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:16.926 [2024-05-15 19:46:43.077490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.926 qpair failed and we were unable to recover it. 00:31:16.926 [2024-05-15 19:46:43.087516] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.926 [2024-05-15 19:46:43.087587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.926 [2024-05-15 19:46:43.087600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.926 [2024-05-15 19:46:43.087605] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.926 [2024-05-15 19:46:43.087609] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:16.926 [2024-05-15 19:46:43.087620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.926 qpair failed and we were unable to recover it. 00:31:16.926 [2024-05-15 19:46:43.097420] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.926 [2024-05-15 19:46:43.097482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.926 [2024-05-15 19:46:43.097494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.926 [2024-05-15 19:46:43.097500] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.926 [2024-05-15 19:46:43.097505] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:16.926 [2024-05-15 19:46:43.097515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:16.926 qpair failed and we were unable to recover it. 
00:31:17.189 [2024-05-15 19:46:43.107504] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.189 [2024-05-15 19:46:43.107575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.189 [2024-05-15 19:46:43.107587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.189 [2024-05-15 19:46:43.107593] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.189 [2024-05-15 19:46:43.107597] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.189 [2024-05-15 19:46:43.107608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.189 qpair failed and we were unable to recover it. 00:31:17.189 [2024-05-15 19:46:43.117542] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.189 [2024-05-15 19:46:43.117623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.189 [2024-05-15 19:46:43.117638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.189 [2024-05-15 19:46:43.117644] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.189 [2024-05-15 19:46:43.117649] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.189 [2024-05-15 19:46:43.117660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.189 qpair failed and we were unable to recover it. 00:31:17.189 [2024-05-15 19:46:43.127521] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.189 [2024-05-15 19:46:43.127620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.189 [2024-05-15 19:46:43.127633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.189 [2024-05-15 19:46:43.127638] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.189 [2024-05-15 19:46:43.127643] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.189 [2024-05-15 19:46:43.127654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.189 qpair failed and we were unable to recover it. 
00:31:17.189 [2024-05-15 19:46:43.137504] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.189 [2024-05-15 19:46:43.137565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.189 [2024-05-15 19:46:43.137578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.189 [2024-05-15 19:46:43.137583] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.189 [2024-05-15 19:46:43.137588] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.189 [2024-05-15 19:46:43.137598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.189 qpair failed and we were unable to recover it. 00:31:17.189 [2024-05-15 19:46:43.147519] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.189 [2024-05-15 19:46:43.147579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.189 [2024-05-15 19:46:43.147592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.189 [2024-05-15 19:46:43.147598] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.189 [2024-05-15 19:46:43.147602] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.189 [2024-05-15 19:46:43.147613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.189 qpair failed and we were unable to recover it. 00:31:17.189 [2024-05-15 19:46:43.157578] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.189 [2024-05-15 19:46:43.157642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.189 [2024-05-15 19:46:43.157655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.189 [2024-05-15 19:46:43.157660] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.189 [2024-05-15 19:46:43.157664] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.189 [2024-05-15 19:46:43.157678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.189 qpair failed and we were unable to recover it. 
00:31:17.189 [2024-05-15 19:46:43.167662] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.189 [2024-05-15 19:46:43.167766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.189 [2024-05-15 19:46:43.167779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.189 [2024-05-15 19:46:43.167784] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.189 [2024-05-15 19:46:43.167789] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.189 [2024-05-15 19:46:43.167799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.189 qpair failed and we were unable to recover it. 00:31:17.189 [2024-05-15 19:46:43.177603] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.189 [2024-05-15 19:46:43.177663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.189 [2024-05-15 19:46:43.177676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.189 [2024-05-15 19:46:43.177681] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.189 [2024-05-15 19:46:43.177686] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.189 [2024-05-15 19:46:43.177697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.189 qpair failed and we were unable to recover it. 00:31:17.189 [2024-05-15 19:46:43.187645] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.189 [2024-05-15 19:46:43.187713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.189 [2024-05-15 19:46:43.187726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.189 [2024-05-15 19:46:43.187731] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.189 [2024-05-15 19:46:43.187736] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.189 [2024-05-15 19:46:43.187746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.189 qpair failed and we were unable to recover it. 
00:31:17.189 [2024-05-15 19:46:43.197664] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.189 [2024-05-15 19:46:43.197720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.189 [2024-05-15 19:46:43.197733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.189 [2024-05-15 19:46:43.197738] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.189 [2024-05-15 19:46:43.197742] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.189 [2024-05-15 19:46:43.197753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.189 qpair failed and we were unable to recover it. 00:31:17.189 [2024-05-15 19:46:43.207622] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.190 [2024-05-15 19:46:43.207678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.190 [2024-05-15 19:46:43.207693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.190 [2024-05-15 19:46:43.207699] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.190 [2024-05-15 19:46:43.207703] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.190 [2024-05-15 19:46:43.207714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.190 qpair failed and we were unable to recover it. 00:31:17.190 [2024-05-15 19:46:43.217608] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.190 [2024-05-15 19:46:43.217667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.190 [2024-05-15 19:46:43.217680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.190 [2024-05-15 19:46:43.217685] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.190 [2024-05-15 19:46:43.217689] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.190 [2024-05-15 19:46:43.217700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.190 qpair failed and we were unable to recover it. 
00:31:17.190 [2024-05-15 19:46:43.227803] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.190 [2024-05-15 19:46:43.227868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.190 [2024-05-15 19:46:43.227880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.190 [2024-05-15 19:46:43.227885] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.190 [2024-05-15 19:46:43.227889] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.190 [2024-05-15 19:46:43.227900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.190 qpair failed and we were unable to recover it. 00:31:17.190 [2024-05-15 19:46:43.237775] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.190 [2024-05-15 19:46:43.237832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.190 [2024-05-15 19:46:43.237844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.190 [2024-05-15 19:46:43.237849] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.190 [2024-05-15 19:46:43.237854] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.190 [2024-05-15 19:46:43.237864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.190 qpair failed and we were unable to recover it. 00:31:17.190 [2024-05-15 19:46:43.247793] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.190 [2024-05-15 19:46:43.247851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.190 [2024-05-15 19:46:43.247864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.190 [2024-05-15 19:46:43.247869] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.190 [2024-05-15 19:46:43.247876] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.190 [2024-05-15 19:46:43.247887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.190 qpair failed and we were unable to recover it. 
00:31:17.190 [2024-05-15 19:46:43.257814] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.190 [2024-05-15 19:46:43.257874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.190 [2024-05-15 19:46:43.257886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.190 [2024-05-15 19:46:43.257891] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.190 [2024-05-15 19:46:43.257896] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.190 [2024-05-15 19:46:43.257906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.190 qpair failed and we were unable to recover it. 00:31:17.190 [2024-05-15 19:46:43.267863] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.190 [2024-05-15 19:46:43.267931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.190 [2024-05-15 19:46:43.267945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.190 [2024-05-15 19:46:43.267950] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.190 [2024-05-15 19:46:43.267954] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.190 [2024-05-15 19:46:43.267965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.190 qpair failed and we were unable to recover it. 00:31:17.190 [2024-05-15 19:46:43.277868] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.190 [2024-05-15 19:46:43.277926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.190 [2024-05-15 19:46:43.277939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.190 [2024-05-15 19:46:43.277944] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.190 [2024-05-15 19:46:43.277948] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.190 [2024-05-15 19:46:43.277959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.190 qpair failed and we were unable to recover it. 
00:31:17.190 [2024-05-15 19:46:43.287876] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.190 [2024-05-15 19:46:43.287939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.190 [2024-05-15 19:46:43.287952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.190 [2024-05-15 19:46:43.287957] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.190 [2024-05-15 19:46:43.287961] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.190 [2024-05-15 19:46:43.287972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.190 qpair failed and we were unable to recover it. 00:31:17.190 [2024-05-15 19:46:43.297911] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.190 [2024-05-15 19:46:43.297976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.190 [2024-05-15 19:46:43.297989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.190 [2024-05-15 19:46:43.297994] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.190 [2024-05-15 19:46:43.297999] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.190 [2024-05-15 19:46:43.298010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.190 qpair failed and we were unable to recover it. 00:31:17.190 [2024-05-15 19:46:43.307939] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.190 [2024-05-15 19:46:43.308002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.190 [2024-05-15 19:46:43.308015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.190 [2024-05-15 19:46:43.308020] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.190 [2024-05-15 19:46:43.308024] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.190 [2024-05-15 19:46:43.308035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.190 qpair failed and we were unable to recover it. 
00:31:17.190 [2024-05-15 19:46:43.317967] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.190 [2024-05-15 19:46:43.318023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.190 [2024-05-15 19:46:43.318036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.190 [2024-05-15 19:46:43.318041] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.190 [2024-05-15 19:46:43.318045] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.190 [2024-05-15 19:46:43.318056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.190 qpair failed and we were unable to recover it. 00:31:17.190 [2024-05-15 19:46:43.327871] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.190 [2024-05-15 19:46:43.327942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.190 [2024-05-15 19:46:43.327955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.190 [2024-05-15 19:46:43.327961] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.190 [2024-05-15 19:46:43.327965] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.190 [2024-05-15 19:46:43.327976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.190 qpair failed and we were unable to recover it. 00:31:17.190 [2024-05-15 19:46:43.338029] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.190 [2024-05-15 19:46:43.338093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.190 [2024-05-15 19:46:43.338105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.190 [2024-05-15 19:46:43.338113] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.190 [2024-05-15 19:46:43.338118] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.190 [2024-05-15 19:46:43.338129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.190 qpair failed and we were unable to recover it. 
00:31:17.191 [2024-05-15 19:46:43.348050] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.191 [2024-05-15 19:46:43.348123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.191 [2024-05-15 19:46:43.348142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.191 [2024-05-15 19:46:43.348149] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.191 [2024-05-15 19:46:43.348153] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.191 [2024-05-15 19:46:43.348167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.191 qpair failed and we were unable to recover it. 00:31:17.191 [2024-05-15 19:46:43.358087] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.191 [2024-05-15 19:46:43.358156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.191 [2024-05-15 19:46:43.358175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.191 [2024-05-15 19:46:43.358182] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.191 [2024-05-15 19:46:43.358187] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.191 [2024-05-15 19:46:43.358202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.191 qpair failed and we were unable to recover it. 00:31:17.191 [2024-05-15 19:46:43.368107] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.191 [2024-05-15 19:46:43.368170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.191 [2024-05-15 19:46:43.368189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.191 [2024-05-15 19:46:43.368195] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.191 [2024-05-15 19:46:43.368200] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.191 [2024-05-15 19:46:43.368214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.191 qpair failed and we were unable to recover it. 
00:31:17.453 [2024-05-15 19:46:43.378142] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.453 [2024-05-15 19:46:43.378207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.453 [2024-05-15 19:46:43.378221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.453 [2024-05-15 19:46:43.378226] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.453 [2024-05-15 19:46:43.378231] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.453 [2024-05-15 19:46:43.378243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.453 qpair failed and we were unable to recover it. 00:31:17.453 [2024-05-15 19:46:43.388067] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.453 [2024-05-15 19:46:43.388164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.453 [2024-05-15 19:46:43.388178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.453 [2024-05-15 19:46:43.388183] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.453 [2024-05-15 19:46:43.388188] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.453 [2024-05-15 19:46:43.388199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.453 qpair failed and we were unable to recover it. 00:31:17.453 [2024-05-15 19:46:43.398209] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.453 [2024-05-15 19:46:43.398268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.453 [2024-05-15 19:46:43.398281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.453 [2024-05-15 19:46:43.398286] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.453 [2024-05-15 19:46:43.398290] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.453 [2024-05-15 19:46:43.398302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.453 qpair failed and we were unable to recover it. 
00:31:17.453 [2024-05-15 19:46:43.408257] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.453 [2024-05-15 19:46:43.408321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.453 [2024-05-15 19:46:43.408338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.453 [2024-05-15 19:46:43.408343] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.453 [2024-05-15 19:46:43.408348] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.453 [2024-05-15 19:46:43.408359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.453 qpair failed and we were unable to recover it. 00:31:17.453 [2024-05-15 19:46:43.418141] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.453 [2024-05-15 19:46:43.418201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.453 [2024-05-15 19:46:43.418214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.453 [2024-05-15 19:46:43.418219] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.453 [2024-05-15 19:46:43.418224] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.453 [2024-05-15 19:46:43.418235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.453 qpair failed and we were unable to recover it. 00:31:17.453 [2024-05-15 19:46:43.428288] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.453 [2024-05-15 19:46:43.428356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.453 [2024-05-15 19:46:43.428369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.453 [2024-05-15 19:46:43.428377] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.453 [2024-05-15 19:46:43.428382] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.453 [2024-05-15 19:46:43.428393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.453 qpair failed and we were unable to recover it. 
00:31:17.453 [2024-05-15 19:46:43.438318] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.453 [2024-05-15 19:46:43.438385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.453 [2024-05-15 19:46:43.438398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.453 [2024-05-15 19:46:43.438403] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.453 [2024-05-15 19:46:43.438408] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.453 [2024-05-15 19:46:43.438419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.453 qpair failed and we were unable to recover it. 00:31:17.453 [2024-05-15 19:46:43.448351] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.453 [2024-05-15 19:46:43.448419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.453 [2024-05-15 19:46:43.448431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.453 [2024-05-15 19:46:43.448437] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.453 [2024-05-15 19:46:43.448441] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.453 [2024-05-15 19:46:43.448452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.453 qpair failed and we were unable to recover it. 00:31:17.453 [2024-05-15 19:46:43.458377] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.453 [2024-05-15 19:46:43.458461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.454 [2024-05-15 19:46:43.458474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.454 [2024-05-15 19:46:43.458479] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.454 [2024-05-15 19:46:43.458483] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.454 [2024-05-15 19:46:43.458494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.454 qpair failed and we were unable to recover it. 
00:31:17.454 [2024-05-15 19:46:43.468405] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.454 [2024-05-15 19:46:43.468474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.454 [2024-05-15 19:46:43.468486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.454 [2024-05-15 19:46:43.468492] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.454 [2024-05-15 19:46:43.468496] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.454 [2024-05-15 19:46:43.468507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.454 qpair failed and we were unable to recover it. 00:31:17.454 [2024-05-15 19:46:43.478316] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.454 [2024-05-15 19:46:43.478374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.454 [2024-05-15 19:46:43.478386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.454 [2024-05-15 19:46:43.478391] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.454 [2024-05-15 19:46:43.478396] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.454 [2024-05-15 19:46:43.478407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.454 qpair failed and we were unable to recover it. 00:31:17.454 [2024-05-15 19:46:43.488476] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.454 [2024-05-15 19:46:43.488538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.454 [2024-05-15 19:46:43.488550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.454 [2024-05-15 19:46:43.488556] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.454 [2024-05-15 19:46:43.488560] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.454 [2024-05-15 19:46:43.488571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.454 qpair failed and we were unable to recover it. 
00:31:17.454 [2024-05-15 19:46:43.498535] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.454 [2024-05-15 19:46:43.498596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.454 [2024-05-15 19:46:43.498608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.454 [2024-05-15 19:46:43.498615] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.454 [2024-05-15 19:46:43.498619] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.454 [2024-05-15 19:46:43.498630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.454 qpair failed and we were unable to recover it. 00:31:17.454 [2024-05-15 19:46:43.508582] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.454 [2024-05-15 19:46:43.508647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.454 [2024-05-15 19:46:43.508659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.454 [2024-05-15 19:46:43.508665] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.454 [2024-05-15 19:46:43.508669] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.454 [2024-05-15 19:46:43.508681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.454 qpair failed and we were unable to recover it. 00:31:17.454 [2024-05-15 19:46:43.518556] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.454 [2024-05-15 19:46:43.518613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.454 [2024-05-15 19:46:43.518628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.454 [2024-05-15 19:46:43.518634] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.454 [2024-05-15 19:46:43.518638] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.454 [2024-05-15 19:46:43.518649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.454 qpair failed and we were unable to recover it. 
00:31:17.454 [2024-05-15 19:46:43.528554] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.454 [2024-05-15 19:46:43.528613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.454 [2024-05-15 19:46:43.528625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.454 [2024-05-15 19:46:43.528631] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.454 [2024-05-15 19:46:43.528635] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.454 [2024-05-15 19:46:43.528646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.454 qpair failed and we were unable to recover it. 00:31:17.454 [2024-05-15 19:46:43.538613] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.454 [2024-05-15 19:46:43.538693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.454 [2024-05-15 19:46:43.538705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.454 [2024-05-15 19:46:43.538711] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.454 [2024-05-15 19:46:43.538715] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.454 [2024-05-15 19:46:43.538726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.454 qpair failed and we were unable to recover it. 00:31:17.454 [2024-05-15 19:46:43.548638] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.454 [2024-05-15 19:46:43.548707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.454 [2024-05-15 19:46:43.548720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.454 [2024-05-15 19:46:43.548725] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.454 [2024-05-15 19:46:43.548730] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.454 [2024-05-15 19:46:43.548741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.454 qpair failed and we were unable to recover it. 
00:31:17.454 [2024-05-15 19:46:43.558672] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.454 [2024-05-15 19:46:43.558775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.454 [2024-05-15 19:46:43.558788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.454 [2024-05-15 19:46:43.558793] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.454 [2024-05-15 19:46:43.558798] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.454 [2024-05-15 19:46:43.558811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.454 qpair failed and we were unable to recover it. 00:31:17.454 [2024-05-15 19:46:43.568680] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.454 [2024-05-15 19:46:43.568778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.454 [2024-05-15 19:46:43.568791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.454 [2024-05-15 19:46:43.568797] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.454 [2024-05-15 19:46:43.568802] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.454 [2024-05-15 19:46:43.568812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.454 qpair failed and we were unable to recover it. 00:31:17.454 [2024-05-15 19:46:43.578640] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.454 [2024-05-15 19:46:43.578746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.454 [2024-05-15 19:46:43.578759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.454 [2024-05-15 19:46:43.578764] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.455 [2024-05-15 19:46:43.578768] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.455 [2024-05-15 19:46:43.578779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.455 qpair failed and we were unable to recover it. 
00:31:17.455 [2024-05-15 19:46:43.588732] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.455 [2024-05-15 19:46:43.588797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.455 [2024-05-15 19:46:43.588809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.455 [2024-05-15 19:46:43.588814] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.455 [2024-05-15 19:46:43.588819] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.455 [2024-05-15 19:46:43.588830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.455 qpair failed and we were unable to recover it. 00:31:17.455 [2024-05-15 19:46:43.598758] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.455 [2024-05-15 19:46:43.598814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.455 [2024-05-15 19:46:43.598827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.455 [2024-05-15 19:46:43.598832] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.455 [2024-05-15 19:46:43.598836] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.455 [2024-05-15 19:46:43.598847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.455 qpair failed and we were unable to recover it. 00:31:17.455 [2024-05-15 19:46:43.608750] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.455 [2024-05-15 19:46:43.608815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.455 [2024-05-15 19:46:43.608831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.455 [2024-05-15 19:46:43.608836] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.455 [2024-05-15 19:46:43.608840] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.455 [2024-05-15 19:46:43.608851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.455 qpair failed and we were unable to recover it. 
00:31:17.455 [2024-05-15 19:46:43.618845] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.455 [2024-05-15 19:46:43.618905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.455 [2024-05-15 19:46:43.618918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.455 [2024-05-15 19:46:43.618923] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.455 [2024-05-15 19:46:43.618927] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.455 [2024-05-15 19:46:43.618938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.455 qpair failed and we were unable to recover it. 00:31:17.455 [2024-05-15 19:46:43.628802] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.455 [2024-05-15 19:46:43.628897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.455 [2024-05-15 19:46:43.628910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.455 [2024-05-15 19:46:43.628916] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.455 [2024-05-15 19:46:43.628921] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.455 [2024-05-15 19:46:43.628931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.455 qpair failed and we were unable to recover it. 00:31:17.717 [2024-05-15 19:46:43.638933] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.717 [2024-05-15 19:46:43.638993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.717 [2024-05-15 19:46:43.639005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.717 [2024-05-15 19:46:43.639010] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.717 [2024-05-15 19:46:43.639015] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.717 [2024-05-15 19:46:43.639026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.717 qpair failed and we were unable to recover it. 
00:31:17.717 [2024-05-15 19:46:43.648919] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.717 [2024-05-15 19:46:43.649019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.717 [2024-05-15 19:46:43.649032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.717 [2024-05-15 19:46:43.649037] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.717 [2024-05-15 19:46:43.649045] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.717 [2024-05-15 19:46:43.649057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.717 qpair failed and we were unable to recover it. 00:31:17.717 [2024-05-15 19:46:43.658938] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.717 [2024-05-15 19:46:43.658999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.717 [2024-05-15 19:46:43.659011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.717 [2024-05-15 19:46:43.659017] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.717 [2024-05-15 19:46:43.659022] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.717 [2024-05-15 19:46:43.659033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.717 qpair failed and we were unable to recover it. 00:31:17.717 [2024-05-15 19:46:43.668957] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.717 [2024-05-15 19:46:43.669025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.717 [2024-05-15 19:46:43.669037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.717 [2024-05-15 19:46:43.669043] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.717 [2024-05-15 19:46:43.669048] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.717 [2024-05-15 19:46:43.669058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.717 qpair failed and we were unable to recover it. 
00:31:17.717 [2024-05-15 19:46:43.679027] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.717 [2024-05-15 19:46:43.679091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.717 [2024-05-15 19:46:43.679116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.717 [2024-05-15 19:46:43.679122] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.717 [2024-05-15 19:46:43.679128] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.717 [2024-05-15 19:46:43.679145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.717 qpair failed and we were unable to recover it. 00:31:17.717 [2024-05-15 19:46:43.689012] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.717 [2024-05-15 19:46:43.689072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.717 [2024-05-15 19:46:43.689092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.717 [2024-05-15 19:46:43.689098] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.717 [2024-05-15 19:46:43.689103] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.717 [2024-05-15 19:46:43.689117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.717 qpair failed and we were unable to recover it. 00:31:17.717 [2024-05-15 19:46:43.699066] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.717 [2024-05-15 19:46:43.699140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.717 [2024-05-15 19:46:43.699158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.717 [2024-05-15 19:46:43.699165] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.717 [2024-05-15 19:46:43.699170] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.717 [2024-05-15 19:46:43.699184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.717 qpair failed and we were unable to recover it. 
00:31:17.717 [2024-05-15 19:46:43.708976] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.717 [2024-05-15 19:46:43.709049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.717 [2024-05-15 19:46:43.709064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.718 [2024-05-15 19:46:43.709069] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.718 [2024-05-15 19:46:43.709074] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.718 [2024-05-15 19:46:43.709085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.718 qpair failed and we were unable to recover it. 00:31:17.718 [2024-05-15 19:46:43.719119] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.718 [2024-05-15 19:46:43.719178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.718 [2024-05-15 19:46:43.719191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.718 [2024-05-15 19:46:43.719196] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.718 [2024-05-15 19:46:43.719200] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.718 [2024-05-15 19:46:43.719212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.718 qpair failed and we were unable to recover it. 00:31:17.718 [2024-05-15 19:46:43.729184] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.718 [2024-05-15 19:46:43.729250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.718 [2024-05-15 19:46:43.729263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.718 [2024-05-15 19:46:43.729268] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.718 [2024-05-15 19:46:43.729273] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.718 [2024-05-15 19:46:43.729284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.718 qpair failed and we were unable to recover it. 
00:31:17.718 [2024-05-15 19:46:43.739162] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.718 [2024-05-15 19:46:43.739221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.718 [2024-05-15 19:46:43.739234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.718 [2024-05-15 19:46:43.739239] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.718 [2024-05-15 19:46:43.739248] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.718 [2024-05-15 19:46:43.739259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.718 qpair failed and we were unable to recover it. 00:31:17.718 [2024-05-15 19:46:43.749241] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.718 [2024-05-15 19:46:43.749348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.718 [2024-05-15 19:46:43.749362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.718 [2024-05-15 19:46:43.749367] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.718 [2024-05-15 19:46:43.749371] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.718 [2024-05-15 19:46:43.749382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.718 qpair failed and we were unable to recover it. 00:31:17.718 [2024-05-15 19:46:43.759257] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.718 [2024-05-15 19:46:43.759317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.718 [2024-05-15 19:46:43.759330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.718 [2024-05-15 19:46:43.759335] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.718 [2024-05-15 19:46:43.759339] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.718 [2024-05-15 19:46:43.759351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.718 qpair failed and we were unable to recover it. 
00:31:17.718 [2024-05-15 19:46:43.769270] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.718 [2024-05-15 19:46:43.769331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.718 [2024-05-15 19:46:43.769344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.718 [2024-05-15 19:46:43.769349] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.718 [2024-05-15 19:46:43.769354] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.718 [2024-05-15 19:46:43.769365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.718 qpair failed and we were unable to recover it. 00:31:17.718 [2024-05-15 19:46:43.779337] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.718 [2024-05-15 19:46:43.779399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.718 [2024-05-15 19:46:43.779412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.718 [2024-05-15 19:46:43.779418] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.718 [2024-05-15 19:46:43.779423] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.718 [2024-05-15 19:46:43.779434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.718 qpair failed and we were unable to recover it. 00:31:17.718 [2024-05-15 19:46:43.789339] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.718 [2024-05-15 19:46:43.789400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.718 [2024-05-15 19:46:43.789413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.718 [2024-05-15 19:46:43.789418] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.718 [2024-05-15 19:46:43.789422] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.718 [2024-05-15 19:46:43.789433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.718 qpair failed and we were unable to recover it. 
00:31:17.718 [2024-05-15 19:46:43.799418] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.718 [2024-05-15 19:46:43.799480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.718 [2024-05-15 19:46:43.799493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.718 [2024-05-15 19:46:43.799498] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.718 [2024-05-15 19:46:43.799503] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.718 [2024-05-15 19:46:43.799514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.718 qpair failed and we were unable to recover it. 00:31:17.718 [2024-05-15 19:46:43.809387] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.718 [2024-05-15 19:46:43.809457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.718 [2024-05-15 19:46:43.809470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.718 [2024-05-15 19:46:43.809475] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.718 [2024-05-15 19:46:43.809480] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.718 [2024-05-15 19:46:43.809491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.718 qpair failed and we were unable to recover it. 00:31:17.718 [2024-05-15 19:46:43.819424] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.718 [2024-05-15 19:46:43.819484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.718 [2024-05-15 19:46:43.819496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.718 [2024-05-15 19:46:43.819501] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.718 [2024-05-15 19:46:43.819506] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.718 [2024-05-15 19:46:43.819517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.718 qpair failed and we were unable to recover it. 
00:31:17.718 [2024-05-15 19:46:43.829457] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.718 [2024-05-15 19:46:43.829518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.718 [2024-05-15 19:46:43.829531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.718 [2024-05-15 19:46:43.829539] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.718 [2024-05-15 19:46:43.829544] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.718 [2024-05-15 19:46:43.829555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.718 qpair failed and we were unable to recover it. 00:31:17.718 [2024-05-15 19:46:43.839467] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.718 [2024-05-15 19:46:43.839528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.718 [2024-05-15 19:46:43.839540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.718 [2024-05-15 19:46:43.839545] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.718 [2024-05-15 19:46:43.839550] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.718 [2024-05-15 19:46:43.839561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.718 qpair failed and we were unable to recover it. 00:31:17.718 [2024-05-15 19:46:43.849496] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.718 [2024-05-15 19:46:43.849557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.719 [2024-05-15 19:46:43.849569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.719 [2024-05-15 19:46:43.849574] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.719 [2024-05-15 19:46:43.849579] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.719 [2024-05-15 19:46:43.849589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.719 qpair failed and we were unable to recover it. 
00:31:17.719 [2024-05-15 19:46:43.859514] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.719 [2024-05-15 19:46:43.859574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.719 [2024-05-15 19:46:43.859587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.719 [2024-05-15 19:46:43.859593] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.719 [2024-05-15 19:46:43.859597] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.719 [2024-05-15 19:46:43.859608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.719 qpair failed and we were unable to recover it. 00:31:17.719 [2024-05-15 19:46:43.869564] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.719 [2024-05-15 19:46:43.869674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.719 [2024-05-15 19:46:43.869687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.719 [2024-05-15 19:46:43.869692] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.719 [2024-05-15 19:46:43.869696] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.719 [2024-05-15 19:46:43.869707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.719 qpair failed and we were unable to recover it. 00:31:17.719 [2024-05-15 19:46:43.879595] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.719 [2024-05-15 19:46:43.879654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.719 [2024-05-15 19:46:43.879667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.719 [2024-05-15 19:46:43.879672] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.719 [2024-05-15 19:46:43.879676] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.719 [2024-05-15 19:46:43.879687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.719 qpair failed and we were unable to recover it. 
00:31:17.719 [2024-05-15 19:46:43.889604] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.719 [2024-05-15 19:46:43.889664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.719 [2024-05-15 19:46:43.889677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.719 [2024-05-15 19:46:43.889682] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.719 [2024-05-15 19:46:43.889686] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.719 [2024-05-15 19:46:43.889697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.719 qpair failed and we were unable to recover it. 00:31:17.719 [2024-05-15 19:46:43.899662] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.719 [2024-05-15 19:46:43.899832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.719 [2024-05-15 19:46:43.899845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.719 [2024-05-15 19:46:43.899850] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.719 [2024-05-15 19:46:43.899855] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.719 [2024-05-15 19:46:43.899865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.719 qpair failed and we were unable to recover it. 00:31:17.996 [2024-05-15 19:46:43.909671] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.996 [2024-05-15 19:46:43.909750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.996 [2024-05-15 19:46:43.909763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.996 [2024-05-15 19:46:43.909768] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.996 [2024-05-15 19:46:43.909772] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.996 [2024-05-15 19:46:43.909783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.996 qpair failed and we were unable to recover it. 
00:31:17.996 [2024-05-15 19:46:43.919666] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.996 [2024-05-15 19:46:43.919723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.996 [2024-05-15 19:46:43.919739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.996 [2024-05-15 19:46:43.919744] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.996 [2024-05-15 19:46:43.919748] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.996 [2024-05-15 19:46:43.919759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.996 qpair failed and we were unable to recover it. 00:31:17.996 [2024-05-15 19:46:43.929597] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.996 [2024-05-15 19:46:43.929663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.996 [2024-05-15 19:46:43.929676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.996 [2024-05-15 19:46:43.929682] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.996 [2024-05-15 19:46:43.929686] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.996 [2024-05-15 19:46:43.929697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.996 qpair failed and we were unable to recover it. 00:31:17.996 [2024-05-15 19:46:43.939748] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.996 [2024-05-15 19:46:43.939808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.996 [2024-05-15 19:46:43.939821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.996 [2024-05-15 19:46:43.939826] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.996 [2024-05-15 19:46:43.939831] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.996 [2024-05-15 19:46:43.939842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.996 qpair failed and we were unable to recover it. 
00:31:17.996 [2024-05-15 19:46:43.949769] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.996 [2024-05-15 19:46:43.949833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.996 [2024-05-15 19:46:43.949846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.996 [2024-05-15 19:46:43.949851] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.996 [2024-05-15 19:46:43.949856] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.996 [2024-05-15 19:46:43.949867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.996 qpair failed and we were unable to recover it. 00:31:17.996 [2024-05-15 19:46:43.959735] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.996 [2024-05-15 19:46:43.959798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.996 [2024-05-15 19:46:43.959810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.996 [2024-05-15 19:46:43.959816] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.996 [2024-05-15 19:46:43.959820] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.996 [2024-05-15 19:46:43.959834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.996 qpair failed and we were unable to recover it. 00:31:17.996 [2024-05-15 19:46:43.969822] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.996 [2024-05-15 19:46:43.969885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.996 [2024-05-15 19:46:43.969898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.996 [2024-05-15 19:46:43.969903] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.996 [2024-05-15 19:46:43.969907] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.996 [2024-05-15 19:46:43.969918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.996 qpair failed and we were unable to recover it. 
00:31:17.996 [2024-05-15 19:46:43.979861] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.996 [2024-05-15 19:46:43.979923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.996 [2024-05-15 19:46:43.979935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.996 [2024-05-15 19:46:43.979941] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.996 [2024-05-15 19:46:43.979945] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.996 [2024-05-15 19:46:43.979956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.996 qpair failed and we were unable to recover it. 00:31:17.996 [2024-05-15 19:46:43.989894] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.996 [2024-05-15 19:46:43.989957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.996 [2024-05-15 19:46:43.989969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.996 [2024-05-15 19:46:43.989974] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.996 [2024-05-15 19:46:43.989979] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.996 [2024-05-15 19:46:43.989990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.996 qpair failed and we were unable to recover it. 00:31:17.996 [2024-05-15 19:46:43.999913] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.996 [2024-05-15 19:46:43.999971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.996 [2024-05-15 19:46:43.999983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.996 [2024-05-15 19:46:43.999988] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.996 [2024-05-15 19:46:43.999993] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.996 [2024-05-15 19:46:44.000003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.996 qpair failed and we were unable to recover it. 
00:31:17.996 [2024-05-15 19:46:44.009942] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.996 [2024-05-15 19:46:44.010024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.996 [2024-05-15 19:46:44.010039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.997 [2024-05-15 19:46:44.010045] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.997 [2024-05-15 19:46:44.010050] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.997 [2024-05-15 19:46:44.010060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.997 qpair failed and we were unable to recover it. 00:31:17.997 [2024-05-15 19:46:44.019963] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.997 [2024-05-15 19:46:44.020029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.997 [2024-05-15 19:46:44.020041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.997 [2024-05-15 19:46:44.020047] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.997 [2024-05-15 19:46:44.020051] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.997 [2024-05-15 19:46:44.020062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.997 qpair failed and we were unable to recover it. 00:31:17.997 [2024-05-15 19:46:44.029928] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.997 [2024-05-15 19:46:44.029996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.997 [2024-05-15 19:46:44.030016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.997 [2024-05-15 19:46:44.030022] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.997 [2024-05-15 19:46:44.030027] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.997 [2024-05-15 19:46:44.030041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.997 qpair failed and we were unable to recover it. 
00:31:17.997 [2024-05-15 19:46:44.040021] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.997 [2024-05-15 19:46:44.040087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.997 [2024-05-15 19:46:44.040106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.997 [2024-05-15 19:46:44.040112] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.997 [2024-05-15 19:46:44.040117] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.997 [2024-05-15 19:46:44.040131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.997 qpair failed and we were unable to recover it. 00:31:17.997 [2024-05-15 19:46:44.050035] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.997 [2024-05-15 19:46:44.050116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.997 [2024-05-15 19:46:44.050131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.997 [2024-05-15 19:46:44.050136] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.997 [2024-05-15 19:46:44.050145] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.997 [2024-05-15 19:46:44.050157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.997 qpair failed and we were unable to recover it. 00:31:17.997 [2024-05-15 19:46:44.060123] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.997 [2024-05-15 19:46:44.060186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.997 [2024-05-15 19:46:44.060199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.997 [2024-05-15 19:46:44.060205] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.997 [2024-05-15 19:46:44.060209] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.997 [2024-05-15 19:46:44.060221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.997 qpair failed and we were unable to recover it. 
00:31:17.997 [2024-05-15 19:46:44.070154] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.997 [2024-05-15 19:46:44.070225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.997 [2024-05-15 19:46:44.070237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.997 [2024-05-15 19:46:44.070243] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.997 [2024-05-15 19:46:44.070247] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.997 [2024-05-15 19:46:44.070258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.997 qpair failed and we were unable to recover it. 00:31:17.997 [2024-05-15 19:46:44.080053] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.997 [2024-05-15 19:46:44.080109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.997 [2024-05-15 19:46:44.080122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.997 [2024-05-15 19:46:44.080127] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.997 [2024-05-15 19:46:44.080132] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.997 [2024-05-15 19:46:44.080143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.997 qpair failed and we were unable to recover it. 00:31:17.997 [2024-05-15 19:46:44.090181] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.997 [2024-05-15 19:46:44.090241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.997 [2024-05-15 19:46:44.090254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.997 [2024-05-15 19:46:44.090259] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.997 [2024-05-15 19:46:44.090264] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.997 [2024-05-15 19:46:44.090275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.997 qpair failed and we were unable to recover it. 
00:31:17.997 [2024-05-15 19:46:44.100242] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.997 [2024-05-15 19:46:44.100320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.997 [2024-05-15 19:46:44.100333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.997 [2024-05-15 19:46:44.100338] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.997 [2024-05-15 19:46:44.100343] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.997 [2024-05-15 19:46:44.100354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.997 qpair failed and we were unable to recover it. 00:31:17.997 [2024-05-15 19:46:44.110247] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.997 [2024-05-15 19:46:44.110322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.997 [2024-05-15 19:46:44.110335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.997 [2024-05-15 19:46:44.110340] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.997 [2024-05-15 19:46:44.110345] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.997 [2024-05-15 19:46:44.110356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.997 qpair failed and we were unable to recover it. 00:31:17.997 [2024-05-15 19:46:44.120263] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.997 [2024-05-15 19:46:44.120364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.997 [2024-05-15 19:46:44.120378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.997 [2024-05-15 19:46:44.120383] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.997 [2024-05-15 19:46:44.120387] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.997 [2024-05-15 19:46:44.120398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.997 qpair failed and we were unable to recover it. 
00:31:17.997 [2024-05-15 19:46:44.130273] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.997 [2024-05-15 19:46:44.130335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.997 [2024-05-15 19:46:44.130347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.997 [2024-05-15 19:46:44.130353] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.997 [2024-05-15 19:46:44.130358] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.997 [2024-05-15 19:46:44.130369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.997 qpair failed and we were unable to recover it. 00:31:17.997 [2024-05-15 19:46:44.140311] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.997 [2024-05-15 19:46:44.140378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.997 [2024-05-15 19:46:44.140391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.997 [2024-05-15 19:46:44.140396] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.997 [2024-05-15 19:46:44.140406] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.997 [2024-05-15 19:46:44.140418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.997 qpair failed and we were unable to recover it. 00:31:17.997 [2024-05-15 19:46:44.150430] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.998 [2024-05-15 19:46:44.150499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.998 [2024-05-15 19:46:44.150512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.998 [2024-05-15 19:46:44.150517] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.998 [2024-05-15 19:46:44.150521] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.998 [2024-05-15 19:46:44.150532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.998 qpair failed and we were unable to recover it. 
00:31:17.998 [2024-05-15 19:46:44.160376] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.998 [2024-05-15 19:46:44.160464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.998 [2024-05-15 19:46:44.160476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.998 [2024-05-15 19:46:44.160481] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.998 [2024-05-15 19:46:44.160486] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.998 [2024-05-15 19:46:44.160497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.998 qpair failed and we were unable to recover it. 00:31:17.998 [2024-05-15 19:46:44.170400] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.998 [2024-05-15 19:46:44.170457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.998 [2024-05-15 19:46:44.170471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.998 [2024-05-15 19:46:44.170476] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.998 [2024-05-15 19:46:44.170480] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:17.998 [2024-05-15 19:46:44.170494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:17.998 qpair failed and we were unable to recover it. 00:31:18.260 [2024-05-15 19:46:44.180316] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.260 [2024-05-15 19:46:44.180375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.260 [2024-05-15 19:46:44.180388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.260 [2024-05-15 19:46:44.180393] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.260 [2024-05-15 19:46:44.180398] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.260 [2024-05-15 19:46:44.180409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.260 qpair failed and we were unable to recover it. 
00:31:18.260 [2024-05-15 19:46:44.190503] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.260 [2024-05-15 19:46:44.190610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.260 [2024-05-15 19:46:44.190623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.260 [2024-05-15 19:46:44.190628] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.260 [2024-05-15 19:46:44.190633] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.260 [2024-05-15 19:46:44.190643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.260 qpair failed and we were unable to recover it. 00:31:18.260 [2024-05-15 19:46:44.200475] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.260 [2024-05-15 19:46:44.200535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.260 [2024-05-15 19:46:44.200548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.260 [2024-05-15 19:46:44.200553] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.261 [2024-05-15 19:46:44.200557] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.261 [2024-05-15 19:46:44.200568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.261 qpair failed and we were unable to recover it. 00:31:18.261 [2024-05-15 19:46:44.210501] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.261 [2024-05-15 19:46:44.210584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.261 [2024-05-15 19:46:44.210597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.261 [2024-05-15 19:46:44.210603] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.261 [2024-05-15 19:46:44.210608] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.261 [2024-05-15 19:46:44.210619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.261 qpair failed and we were unable to recover it. 
00:31:18.261 [2024-05-15 19:46:44.220426] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.261 [2024-05-15 19:46:44.220485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.261 [2024-05-15 19:46:44.220498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.261 [2024-05-15 19:46:44.220503] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.261 [2024-05-15 19:46:44.220507] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.261 [2024-05-15 19:46:44.220518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.261 qpair failed and we were unable to recover it. 00:31:18.261 [2024-05-15 19:46:44.230528] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.261 [2024-05-15 19:46:44.230598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.261 [2024-05-15 19:46:44.230611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.261 [2024-05-15 19:46:44.230622] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.261 [2024-05-15 19:46:44.230627] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.261 [2024-05-15 19:46:44.230637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.261 qpair failed and we were unable to recover it. 00:31:18.261 [2024-05-15 19:46:44.240524] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.261 [2024-05-15 19:46:44.240633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.261 [2024-05-15 19:46:44.240646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.261 [2024-05-15 19:46:44.240652] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.261 [2024-05-15 19:46:44.240656] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.261 [2024-05-15 19:46:44.240667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.261 qpair failed and we were unable to recover it. 
00:31:18.261 [2024-05-15 19:46:44.250602] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.261 [2024-05-15 19:46:44.250666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.261 [2024-05-15 19:46:44.250679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.261 [2024-05-15 19:46:44.250684] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.261 [2024-05-15 19:46:44.250688] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.261 [2024-05-15 19:46:44.250699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.261 qpair failed and we were unable to recover it. 00:31:18.261 [2024-05-15 19:46:44.260637] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.261 [2024-05-15 19:46:44.260698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.261 [2024-05-15 19:46:44.260711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.261 [2024-05-15 19:46:44.260716] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.261 [2024-05-15 19:46:44.260720] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.261 [2024-05-15 19:46:44.260731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.261 qpair failed and we were unable to recover it. 00:31:18.261 [2024-05-15 19:46:44.270662] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.261 [2024-05-15 19:46:44.270727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.261 [2024-05-15 19:46:44.270739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.261 [2024-05-15 19:46:44.270744] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.261 [2024-05-15 19:46:44.270748] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.261 [2024-05-15 19:46:44.270759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.261 qpair failed and we were unable to recover it. 
00:31:18.261 [2024-05-15 19:46:44.280677] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.261 [2024-05-15 19:46:44.280731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.261 [2024-05-15 19:46:44.280743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.261 [2024-05-15 19:46:44.280748] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.261 [2024-05-15 19:46:44.280753] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.261 [2024-05-15 19:46:44.280763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.261 qpair failed and we were unable to recover it. 00:31:18.261 [2024-05-15 19:46:44.290717] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.261 [2024-05-15 19:46:44.290777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.261 [2024-05-15 19:46:44.290789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.261 [2024-05-15 19:46:44.290794] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.261 [2024-05-15 19:46:44.290799] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.261 [2024-05-15 19:46:44.290809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.261 qpair failed and we were unable to recover it. 00:31:18.261 [2024-05-15 19:46:44.300749] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.261 [2024-05-15 19:46:44.300845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.261 [2024-05-15 19:46:44.300857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.261 [2024-05-15 19:46:44.300862] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.261 [2024-05-15 19:46:44.300867] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.261 [2024-05-15 19:46:44.300878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.261 qpair failed and we were unable to recover it. 
00:31:18.261 [2024-05-15 19:46:44.310767] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.261 [2024-05-15 19:46:44.310834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.261 [2024-05-15 19:46:44.310846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.261 [2024-05-15 19:46:44.310851] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.261 [2024-05-15 19:46:44.310856] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.261 [2024-05-15 19:46:44.310866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.261 qpair failed and we were unable to recover it. 00:31:18.261 [2024-05-15 19:46:44.320793] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.261 [2024-05-15 19:46:44.320853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.261 [2024-05-15 19:46:44.320867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.261 [2024-05-15 19:46:44.320873] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.261 [2024-05-15 19:46:44.320877] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.261 [2024-05-15 19:46:44.320888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.261 qpair failed and we were unable to recover it. 00:31:18.261 [2024-05-15 19:46:44.330817] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.261 [2024-05-15 19:46:44.330876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.261 [2024-05-15 19:46:44.330888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.261 [2024-05-15 19:46:44.330893] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.261 [2024-05-15 19:46:44.330898] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.261 [2024-05-15 19:46:44.330909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.261 qpair failed and we were unable to recover it. 
00:31:18.262 [2024-05-15 19:46:44.340889] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.262 [2024-05-15 19:46:44.341000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.262 [2024-05-15 19:46:44.341013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.262 [2024-05-15 19:46:44.341018] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.262 [2024-05-15 19:46:44.341022] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.262 [2024-05-15 19:46:44.341033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.262 qpair failed and we were unable to recover it. 00:31:18.262 [2024-05-15 19:46:44.350883] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.262 [2024-05-15 19:46:44.350946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.262 [2024-05-15 19:46:44.350959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.262 [2024-05-15 19:46:44.350964] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.262 [2024-05-15 19:46:44.350969] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.262 [2024-05-15 19:46:44.350979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.262 qpair failed and we were unable to recover it. 00:31:18.262 [2024-05-15 19:46:44.360919] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.262 [2024-05-15 19:46:44.361011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.262 [2024-05-15 19:46:44.361030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.262 [2024-05-15 19:46:44.361037] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.262 [2024-05-15 19:46:44.361042] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.262 [2024-05-15 19:46:44.361059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.262 qpair failed and we were unable to recover it. 
00:31:18.262 [2024-05-15 19:46:44.370976] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.262 [2024-05-15 19:46:44.371056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.262 [2024-05-15 19:46:44.371076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.262 [2024-05-15 19:46:44.371083] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.262 [2024-05-15 19:46:44.371087] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.262 [2024-05-15 19:46:44.371101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.262 qpair failed and we were unable to recover it. 00:31:18.262 [2024-05-15 19:46:44.380969] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.262 [2024-05-15 19:46:44.381037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.262 [2024-05-15 19:46:44.381056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.262 [2024-05-15 19:46:44.381063] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.262 [2024-05-15 19:46:44.381067] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.262 [2024-05-15 19:46:44.381082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.262 qpair failed and we were unable to recover it. 00:31:18.262 [2024-05-15 19:46:44.391004] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.262 [2024-05-15 19:46:44.391071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.262 [2024-05-15 19:46:44.391085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.262 [2024-05-15 19:46:44.391090] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.262 [2024-05-15 19:46:44.391095] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.262 [2024-05-15 19:46:44.391106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.262 qpair failed and we were unable to recover it. 
00:31:18.262 [2024-05-15 19:46:44.401046] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.262 [2024-05-15 19:46:44.401107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.262 [2024-05-15 19:46:44.401120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.262 [2024-05-15 19:46:44.401125] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.262 [2024-05-15 19:46:44.401130] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.262 [2024-05-15 19:46:44.401141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.262 qpair failed and we were unable to recover it. 00:31:18.262 [2024-05-15 19:46:44.411068] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.262 [2024-05-15 19:46:44.411122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.262 [2024-05-15 19:46:44.411139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.262 [2024-05-15 19:46:44.411144] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.262 [2024-05-15 19:46:44.411149] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.262 [2024-05-15 19:46:44.411160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.262 qpair failed and we were unable to recover it. 00:31:18.262 [2024-05-15 19:46:44.421148] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.262 [2024-05-15 19:46:44.421209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.262 [2024-05-15 19:46:44.421221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.262 [2024-05-15 19:46:44.421227] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.262 [2024-05-15 19:46:44.421231] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.262 [2024-05-15 19:46:44.421242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.262 qpair failed and we were unable to recover it. 
00:31:18.262 [2024-05-15 19:46:44.431012] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.262 [2024-05-15 19:46:44.431078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.262 [2024-05-15 19:46:44.431091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.262 [2024-05-15 19:46:44.431096] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.262 [2024-05-15 19:46:44.431101] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.262 [2024-05-15 19:46:44.431112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.262 qpair failed and we were unable to recover it. 00:31:18.262 [2024-05-15 19:46:44.441120] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.262 [2024-05-15 19:46:44.441184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.262 [2024-05-15 19:46:44.441197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.262 [2024-05-15 19:46:44.441202] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.262 [2024-05-15 19:46:44.441207] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.262 [2024-05-15 19:46:44.441218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.262 qpair failed and we were unable to recover it. 00:31:18.525 [2024-05-15 19:46:44.451196] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.525 [2024-05-15 19:46:44.451263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.525 [2024-05-15 19:46:44.451276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.525 [2024-05-15 19:46:44.451281] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.525 [2024-05-15 19:46:44.451285] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.525 [2024-05-15 19:46:44.451300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.525 qpair failed and we were unable to recover it. 
00:31:18.525 [2024-05-15 19:46:44.461199] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.525 [2024-05-15 19:46:44.461258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.525 [2024-05-15 19:46:44.461270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.525 [2024-05-15 19:46:44.461276] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.525 [2024-05-15 19:46:44.461280] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.525 [2024-05-15 19:46:44.461291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.525 qpair failed and we were unable to recover it. 00:31:18.525 [2024-05-15 19:46:44.471305] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.525 [2024-05-15 19:46:44.471411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.525 [2024-05-15 19:46:44.471424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.525 [2024-05-15 19:46:44.471429] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.525 [2024-05-15 19:46:44.471434] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.525 [2024-05-15 19:46:44.471445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.525 qpair failed and we were unable to recover it. 00:31:18.525 [2024-05-15 19:46:44.481228] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.525 [2024-05-15 19:46:44.481292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.525 [2024-05-15 19:46:44.481305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.525 [2024-05-15 19:46:44.481310] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.525 [2024-05-15 19:46:44.481319] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.525 [2024-05-15 19:46:44.481330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.525 qpair failed and we were unable to recover it. 
00:31:18.525 [2024-05-15 19:46:44.491285] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.525 [2024-05-15 19:46:44.491346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.525 [2024-05-15 19:46:44.491359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.525 [2024-05-15 19:46:44.491365] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.525 [2024-05-15 19:46:44.491369] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.525 [2024-05-15 19:46:44.491380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.525 qpair failed and we were unable to recover it. 00:31:18.525 [2024-05-15 19:46:44.501206] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.525 [2024-05-15 19:46:44.501269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.525 [2024-05-15 19:46:44.501281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.525 [2024-05-15 19:46:44.501287] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.525 [2024-05-15 19:46:44.501291] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.525 [2024-05-15 19:46:44.501302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.525 qpair failed and we were unable to recover it. 00:31:18.525 [2024-05-15 19:46:44.511345] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.525 [2024-05-15 19:46:44.511410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.525 [2024-05-15 19:46:44.511422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.525 [2024-05-15 19:46:44.511428] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.525 [2024-05-15 19:46:44.511433] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.525 [2024-05-15 19:46:44.511443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.525 qpair failed and we were unable to recover it. 
00:31:18.525 [2024-05-15 19:46:44.521362] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.525 [2024-05-15 19:46:44.521424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.525 [2024-05-15 19:46:44.521437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.525 [2024-05-15 19:46:44.521442] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.525 [2024-05-15 19:46:44.521446] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.525 [2024-05-15 19:46:44.521457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.525 qpair failed and we were unable to recover it. 00:31:18.525 [2024-05-15 19:46:44.531409] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.525 [2024-05-15 19:46:44.531470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.525 [2024-05-15 19:46:44.531483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.525 [2024-05-15 19:46:44.531488] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.525 [2024-05-15 19:46:44.531493] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.525 [2024-05-15 19:46:44.531504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.525 qpair failed and we were unable to recover it. 00:31:18.525 [2024-05-15 19:46:44.541446] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.525 [2024-05-15 19:46:44.541541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.525 [2024-05-15 19:46:44.541554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.525 [2024-05-15 19:46:44.541559] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.525 [2024-05-15 19:46:44.541566] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.525 [2024-05-15 19:46:44.541577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.525 qpair failed and we were unable to recover it. 
00:31:18.525 [2024-05-15 19:46:44.551489] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.525 [2024-05-15 19:46:44.551554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.525 [2024-05-15 19:46:44.551566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.525 [2024-05-15 19:46:44.551571] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.525 [2024-05-15 19:46:44.551576] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.525 [2024-05-15 19:46:44.551587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.525 qpair failed and we were unable to recover it. 00:31:18.525 [2024-05-15 19:46:44.561372] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.525 [2024-05-15 19:46:44.561439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.525 [2024-05-15 19:46:44.561451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.525 [2024-05-15 19:46:44.561456] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.525 [2024-05-15 19:46:44.561461] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.525 [2024-05-15 19:46:44.561472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.525 qpair failed and we were unable to recover it. 00:31:18.525 [2024-05-15 19:46:44.571503] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.525 [2024-05-15 19:46:44.571562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.525 [2024-05-15 19:46:44.571574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.525 [2024-05-15 19:46:44.571580] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.525 [2024-05-15 19:46:44.571584] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.525 [2024-05-15 19:46:44.571595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.525 qpair failed and we were unable to recover it. 
00:31:18.525 [2024-05-15 19:46:44.581543] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.525 [2024-05-15 19:46:44.581604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.525 [2024-05-15 19:46:44.581616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.525 [2024-05-15 19:46:44.581622] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.525 [2024-05-15 19:46:44.581626] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.526 [2024-05-15 19:46:44.581637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.526 qpair failed and we were unable to recover it. 00:31:18.526 [2024-05-15 19:46:44.591560] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.526 [2024-05-15 19:46:44.591627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.526 [2024-05-15 19:46:44.591640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.526 [2024-05-15 19:46:44.591645] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.526 [2024-05-15 19:46:44.591650] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.526 [2024-05-15 19:46:44.591661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.526 qpair failed and we were unable to recover it. 00:31:18.526 [2024-05-15 19:46:44.601610] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.526 [2024-05-15 19:46:44.601680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.526 [2024-05-15 19:46:44.601692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.526 [2024-05-15 19:46:44.601697] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.526 [2024-05-15 19:46:44.601702] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.526 [2024-05-15 19:46:44.601713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.526 qpair failed and we were unable to recover it. 
00:31:18.526 [2024-05-15 19:46:44.611652] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.526 [2024-05-15 19:46:44.611713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.526 [2024-05-15 19:46:44.611725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.526 [2024-05-15 19:46:44.611730] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.526 [2024-05-15 19:46:44.611735] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.526 [2024-05-15 19:46:44.611745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.526 qpair failed and we were unable to recover it. 00:31:18.526 [2024-05-15 19:46:44.621672] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.526 [2024-05-15 19:46:44.621734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.526 [2024-05-15 19:46:44.621747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.526 [2024-05-15 19:46:44.621752] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.526 [2024-05-15 19:46:44.621757] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.526 [2024-05-15 19:46:44.621767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.526 qpair failed and we were unable to recover it. 00:31:18.526 [2024-05-15 19:46:44.631666] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.526 [2024-05-15 19:46:44.631734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.526 [2024-05-15 19:46:44.631746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.526 [2024-05-15 19:46:44.631754] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.526 [2024-05-15 19:46:44.631759] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.526 [2024-05-15 19:46:44.631769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.526 qpair failed and we were unable to recover it. 
00:31:18.526 [2024-05-15 19:46:44.641713] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.526 [2024-05-15 19:46:44.641787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.526 [2024-05-15 19:46:44.641799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.526 [2024-05-15 19:46:44.641805] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.526 [2024-05-15 19:46:44.641809] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.526 [2024-05-15 19:46:44.641821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.526 qpair failed and we were unable to recover it. 00:31:18.526 [2024-05-15 19:46:44.651728] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.526 [2024-05-15 19:46:44.651786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.526 [2024-05-15 19:46:44.651798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.526 [2024-05-15 19:46:44.651804] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.526 [2024-05-15 19:46:44.651808] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.526 [2024-05-15 19:46:44.651819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.526 qpair failed and we were unable to recover it. 00:31:18.526 [2024-05-15 19:46:44.661750] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.526 [2024-05-15 19:46:44.661816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.526 [2024-05-15 19:46:44.661828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.526 [2024-05-15 19:46:44.661834] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.526 [2024-05-15 19:46:44.661838] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.526 [2024-05-15 19:46:44.661849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.526 qpair failed and we were unable to recover it. 
00:31:18.526 [2024-05-15 19:46:44.671801] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.526 [2024-05-15 19:46:44.671862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.526 [2024-05-15 19:46:44.671874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.526 [2024-05-15 19:46:44.671879] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.526 [2024-05-15 19:46:44.671884] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.526 [2024-05-15 19:46:44.671895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.526 qpair failed and we were unable to recover it. 00:31:18.526 [2024-05-15 19:46:44.681814] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.526 [2024-05-15 19:46:44.681874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.526 [2024-05-15 19:46:44.681886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.526 [2024-05-15 19:46:44.681891] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.526 [2024-05-15 19:46:44.681895] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.526 [2024-05-15 19:46:44.681905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.526 qpair failed and we were unable to recover it. 00:31:18.526 [2024-05-15 19:46:44.691855] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.526 [2024-05-15 19:46:44.691918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.526 [2024-05-15 19:46:44.691930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.526 [2024-05-15 19:46:44.691936] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.526 [2024-05-15 19:46:44.691940] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.526 [2024-05-15 19:46:44.691950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.526 qpair failed and we were unable to recover it. 
00:31:18.526 [2024-05-15 19:46:44.701853] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.526 [2024-05-15 19:46:44.701912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.526 [2024-05-15 19:46:44.701924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.526 [2024-05-15 19:46:44.701929] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.526 [2024-05-15 19:46:44.701933] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.526 [2024-05-15 19:46:44.701944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.526 qpair failed and we were unable to recover it. 00:31:18.788 [2024-05-15 19:46:44.711909] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.788 [2024-05-15 19:46:44.712011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.788 [2024-05-15 19:46:44.712024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.788 [2024-05-15 19:46:44.712030] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.788 [2024-05-15 19:46:44.712035] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.788 [2024-05-15 19:46:44.712046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.788 qpair failed and we were unable to recover it. 00:31:18.788 [2024-05-15 19:46:44.721945] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.788 [2024-05-15 19:46:44.722005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.788 [2024-05-15 19:46:44.722017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.788 [2024-05-15 19:46:44.722025] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.788 [2024-05-15 19:46:44.722030] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.788 [2024-05-15 19:46:44.722040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.788 qpair failed and we were unable to recover it. 
00:31:18.788 [2024-05-15 19:46:44.731969] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.788 [2024-05-15 19:46:44.732042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.788 [2024-05-15 19:46:44.732061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.788 [2024-05-15 19:46:44.732068] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.788 [2024-05-15 19:46:44.732073] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.788 [2024-05-15 19:46:44.732087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.788 qpair failed and we were unable to recover it. 00:31:18.788 [2024-05-15 19:46:44.742013] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.788 [2024-05-15 19:46:44.742088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.788 [2024-05-15 19:46:44.742107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.788 [2024-05-15 19:46:44.742114] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.788 [2024-05-15 19:46:44.742119] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.788 [2024-05-15 19:46:44.742133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.788 qpair failed and we were unable to recover it. 00:31:18.788 [2024-05-15 19:46:44.752044] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.788 [2024-05-15 19:46:44.752114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.788 [2024-05-15 19:46:44.752133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.788 [2024-05-15 19:46:44.752139] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.788 [2024-05-15 19:46:44.752144] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.788 [2024-05-15 19:46:44.752158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.788 qpair failed and we were unable to recover it. 
00:31:18.788 [2024-05-15 19:46:44.762034] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.788 [2024-05-15 19:46:44.762095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.788 [2024-05-15 19:46:44.762114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.788 [2024-05-15 19:46:44.762120] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.788 [2024-05-15 19:46:44.762125] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.788 [2024-05-15 19:46:44.762139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.788 qpair failed and we were unable to recover it. 00:31:18.788 [2024-05-15 19:46:44.772063] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.788 [2024-05-15 19:46:44.772120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.788 [2024-05-15 19:46:44.772133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.788 [2024-05-15 19:46:44.772139] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.788 [2024-05-15 19:46:44.772143] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.788 [2024-05-15 19:46:44.772155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.788 qpair failed and we were unable to recover it. 00:31:18.788 [2024-05-15 19:46:44.782098] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.788 [2024-05-15 19:46:44.782159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.788 [2024-05-15 19:46:44.782171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.788 [2024-05-15 19:46:44.782176] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.788 [2024-05-15 19:46:44.782181] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.788 [2024-05-15 19:46:44.782192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.788 qpair failed and we were unable to recover it. 
00:31:18.788 [2024-05-15 19:46:44.792117] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.788 [2024-05-15 19:46:44.792182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.788 [2024-05-15 19:46:44.792194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.788 [2024-05-15 19:46:44.792199] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.788 [2024-05-15 19:46:44.792204] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.788 [2024-05-15 19:46:44.792214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.788 qpair failed and we were unable to recover it. 00:31:18.788 [2024-05-15 19:46:44.802156] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.788 [2024-05-15 19:46:44.802222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.788 [2024-05-15 19:46:44.802235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.788 [2024-05-15 19:46:44.802240] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.788 [2024-05-15 19:46:44.802245] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.788 [2024-05-15 19:46:44.802255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.788 qpair failed and we were unable to recover it. 00:31:18.788 [2024-05-15 19:46:44.812173] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.788 [2024-05-15 19:46:44.812231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.789 [2024-05-15 19:46:44.812246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.789 [2024-05-15 19:46:44.812252] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.789 [2024-05-15 19:46:44.812256] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.789 [2024-05-15 19:46:44.812267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.789 qpair failed and we were unable to recover it. 
00:31:18.789 [2024-05-15 19:46:44.822213] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.789 [2024-05-15 19:46:44.822275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.789 [2024-05-15 19:46:44.822287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.789 [2024-05-15 19:46:44.822292] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.789 [2024-05-15 19:46:44.822297] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.789 [2024-05-15 19:46:44.822308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.789 qpair failed and we were unable to recover it. 00:31:18.789 [2024-05-15 19:46:44.832246] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.789 [2024-05-15 19:46:44.832308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.789 [2024-05-15 19:46:44.832324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.789 [2024-05-15 19:46:44.832330] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.789 [2024-05-15 19:46:44.832334] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.789 [2024-05-15 19:46:44.832346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.789 qpair failed and we were unable to recover it. 00:31:18.789 [2024-05-15 19:46:44.842138] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.789 [2024-05-15 19:46:44.842200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.789 [2024-05-15 19:46:44.842212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.789 [2024-05-15 19:46:44.842218] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.789 [2024-05-15 19:46:44.842222] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.789 [2024-05-15 19:46:44.842233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.789 qpair failed and we were unable to recover it. 
00:31:18.789 [2024-05-15 19:46:44.852286] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.789 [2024-05-15 19:46:44.852351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.789 [2024-05-15 19:46:44.852364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.789 [2024-05-15 19:46:44.852369] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.789 [2024-05-15 19:46:44.852374] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.789 [2024-05-15 19:46:44.852387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.789 qpair failed and we were unable to recover it. 00:31:18.789 [2024-05-15 19:46:44.862254] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.789 [2024-05-15 19:46:44.862322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.789 [2024-05-15 19:46:44.862336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.789 [2024-05-15 19:46:44.862341] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.789 [2024-05-15 19:46:44.862345] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.789 [2024-05-15 19:46:44.862357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.789 qpair failed and we were unable to recover it. 00:31:18.789 [2024-05-15 19:46:44.872356] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.789 [2024-05-15 19:46:44.872423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.789 [2024-05-15 19:46:44.872436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.789 [2024-05-15 19:46:44.872441] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.789 [2024-05-15 19:46:44.872446] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.789 [2024-05-15 19:46:44.872457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.789 qpair failed and we were unable to recover it. 
00:31:18.789 [2024-05-15 19:46:44.882371] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.789 [2024-05-15 19:46:44.882469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.789 [2024-05-15 19:46:44.882482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.789 [2024-05-15 19:46:44.882487] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.789 [2024-05-15 19:46:44.882492] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.789 [2024-05-15 19:46:44.882504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.789 qpair failed and we were unable to recover it. 00:31:18.789 [2024-05-15 19:46:44.892414] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.789 [2024-05-15 19:46:44.892475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.789 [2024-05-15 19:46:44.892488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.789 [2024-05-15 19:46:44.892493] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.789 [2024-05-15 19:46:44.892498] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.789 [2024-05-15 19:46:44.892509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.789 qpair failed and we were unable to recover it. 00:31:18.789 [2024-05-15 19:46:44.902468] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.789 [2024-05-15 19:46:44.902568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.789 [2024-05-15 19:46:44.902584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.789 [2024-05-15 19:46:44.902589] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.789 [2024-05-15 19:46:44.902594] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.789 [2024-05-15 19:46:44.902605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.789 qpair failed and we were unable to recover it. 
00:31:18.789 [2024-05-15 19:46:44.912437] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.789 [2024-05-15 19:46:44.912509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.789 [2024-05-15 19:46:44.912522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.789 [2024-05-15 19:46:44.912527] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.789 [2024-05-15 19:46:44.912532] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.789 [2024-05-15 19:46:44.912543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.789 qpair failed and we were unable to recover it. 00:31:18.789 [2024-05-15 19:46:44.922489] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.789 [2024-05-15 19:46:44.922546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.789 [2024-05-15 19:46:44.922558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.789 [2024-05-15 19:46:44.922563] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.789 [2024-05-15 19:46:44.922568] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.789 [2024-05-15 19:46:44.922579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.789 qpair failed and we were unable to recover it. 00:31:18.789 [2024-05-15 19:46:44.932521] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.789 [2024-05-15 19:46:44.932582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.789 [2024-05-15 19:46:44.932594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.789 [2024-05-15 19:46:44.932599] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.789 [2024-05-15 19:46:44.932604] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.789 [2024-05-15 19:46:44.932615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.789 qpair failed and we were unable to recover it. 
00:31:18.789 [2024-05-15 19:46:44.942528] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.789 [2024-05-15 19:46:44.942596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.789 [2024-05-15 19:46:44.942608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.789 [2024-05-15 19:46:44.942613] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.789 [2024-05-15 19:46:44.942623] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.789 [2024-05-15 19:46:44.942634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.789 qpair failed and we were unable to recover it. 00:31:18.790 [2024-05-15 19:46:44.952569] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.790 [2024-05-15 19:46:44.952640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.790 [2024-05-15 19:46:44.952653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.790 [2024-05-15 19:46:44.952658] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.790 [2024-05-15 19:46:44.952663] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.790 [2024-05-15 19:46:44.952674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.790 qpair failed and we were unable to recover it. 00:31:18.790 [2024-05-15 19:46:44.962669] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.790 [2024-05-15 19:46:44.962771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.790 [2024-05-15 19:46:44.962784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.790 [2024-05-15 19:46:44.962790] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.790 [2024-05-15 19:46:44.962795] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:18.790 [2024-05-15 19:46:44.962805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:18.790 qpair failed and we were unable to recover it. 
00:31:19.052 [2024-05-15 19:46:44.972656] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.052 [2024-05-15 19:46:44.972731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.052 [2024-05-15 19:46:44.972743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.052 [2024-05-15 19:46:44.972749] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.052 [2024-05-15 19:46:44.972753] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.052 [2024-05-15 19:46:44.972764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.052 qpair failed and we were unable to recover it. 00:31:19.052 [2024-05-15 19:46:44.982652] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.052 [2024-05-15 19:46:44.982713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.052 [2024-05-15 19:46:44.982726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.052 [2024-05-15 19:46:44.982731] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.052 [2024-05-15 19:46:44.982736] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.053 [2024-05-15 19:46:44.982747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.053 qpair failed and we were unable to recover it. 00:31:19.053 [2024-05-15 19:46:44.992681] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.053 [2024-05-15 19:46:44.992764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.053 [2024-05-15 19:46:44.992780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.053 [2024-05-15 19:46:44.992786] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.053 [2024-05-15 19:46:44.992791] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.053 [2024-05-15 19:46:44.992803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.053 qpair failed and we were unable to recover it. 
00:31:19.053 [2024-05-15 19:46:45.002740] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.053 [2024-05-15 19:46:45.002796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.053 [2024-05-15 19:46:45.002809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.053 [2024-05-15 19:46:45.002814] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.053 [2024-05-15 19:46:45.002818] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.053 [2024-05-15 19:46:45.002829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.053 qpair failed and we were unable to recover it. 00:31:19.053 [2024-05-15 19:46:45.012744] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.053 [2024-05-15 19:46:45.012852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.053 [2024-05-15 19:46:45.012865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.053 [2024-05-15 19:46:45.012870] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.053 [2024-05-15 19:46:45.012874] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.053 [2024-05-15 19:46:45.012885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.053 qpair failed and we were unable to recover it. 00:31:19.053 [2024-05-15 19:46:45.022639] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.053 [2024-05-15 19:46:45.022699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.053 [2024-05-15 19:46:45.022712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.053 [2024-05-15 19:46:45.022718] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.053 [2024-05-15 19:46:45.022722] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.053 [2024-05-15 19:46:45.022733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.053 qpair failed and we were unable to recover it. 
00:31:19.053 [2024-05-15 19:46:45.032804] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.053 [2024-05-15 19:46:45.032869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.053 [2024-05-15 19:46:45.032882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.053 [2024-05-15 19:46:45.032890] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.053 [2024-05-15 19:46:45.032895] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.053 [2024-05-15 19:46:45.032906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.053 qpair failed and we were unable to recover it. 00:31:19.053 [2024-05-15 19:46:45.042697] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.053 [2024-05-15 19:46:45.042794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.053 [2024-05-15 19:46:45.042807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.053 [2024-05-15 19:46:45.042812] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.053 [2024-05-15 19:46:45.042817] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.053 [2024-05-15 19:46:45.042828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.053 qpair failed and we were unable to recover it. 00:31:19.053 [2024-05-15 19:46:45.052870] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.053 [2024-05-15 19:46:45.052968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.053 [2024-05-15 19:46:45.052981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.053 [2024-05-15 19:46:45.052986] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.053 [2024-05-15 19:46:45.052990] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.053 [2024-05-15 19:46:45.053002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.053 qpair failed and we were unable to recover it. 
00:31:19.053 [2024-05-15 19:46:45.062902] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.053 [2024-05-15 19:46:45.062970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.053 [2024-05-15 19:46:45.062983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.053 [2024-05-15 19:46:45.062988] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.053 [2024-05-15 19:46:45.062993] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.053 [2024-05-15 19:46:45.063003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.053 qpair failed and we were unable to recover it. 00:31:19.053 [2024-05-15 19:46:45.072896] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.053 [2024-05-15 19:46:45.072961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.053 [2024-05-15 19:46:45.072974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.053 [2024-05-15 19:46:45.072980] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.053 [2024-05-15 19:46:45.072984] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.053 [2024-05-15 19:46:45.072995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.053 qpair failed and we were unable to recover it. 00:31:19.053 [2024-05-15 19:46:45.082966] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.053 [2024-05-15 19:46:45.083025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.053 [2024-05-15 19:46:45.083037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.053 [2024-05-15 19:46:45.083043] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.053 [2024-05-15 19:46:45.083047] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.053 [2024-05-15 19:46:45.083058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.053 qpair failed and we were unable to recover it. 
00:31:19.053 [2024-05-15 19:46:45.093047] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.053 [2024-05-15 19:46:45.093117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.053 [2024-05-15 19:46:45.093129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.053 [2024-05-15 19:46:45.093135] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.054 [2024-05-15 19:46:45.093139] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.054 [2024-05-15 19:46:45.093150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.054 qpair failed and we were unable to recover it. 00:31:19.054 [2024-05-15 19:46:45.103012] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.054 [2024-05-15 19:46:45.103072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.054 [2024-05-15 19:46:45.103085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.054 [2024-05-15 19:46:45.103091] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.054 [2024-05-15 19:46:45.103095] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.054 [2024-05-15 19:46:45.103106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.054 qpair failed and we were unable to recover it. 00:31:19.054 [2024-05-15 19:46:45.113051] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.054 [2024-05-15 19:46:45.113116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.054 [2024-05-15 19:46:45.113128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.054 [2024-05-15 19:46:45.113133] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.054 [2024-05-15 19:46:45.113138] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.054 [2024-05-15 19:46:45.113149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.054 qpair failed and we were unable to recover it. 
00:31:19.054 [2024-05-15 19:46:45.123129] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.054 [2024-05-15 19:46:45.123189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.054 [2024-05-15 19:46:45.123202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.054 [2024-05-15 19:46:45.123210] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.054 [2024-05-15 19:46:45.123214] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.054 [2024-05-15 19:46:45.123225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.054 qpair failed and we were unable to recover it. 00:31:19.054 [2024-05-15 19:46:45.133030] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.054 [2024-05-15 19:46:45.133168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.054 [2024-05-15 19:46:45.133180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.054 [2024-05-15 19:46:45.133185] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.054 [2024-05-15 19:46:45.133190] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.054 [2024-05-15 19:46:45.133201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.054 qpair failed and we were unable to recover it. 00:31:19.054 [2024-05-15 19:46:45.143200] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.054 [2024-05-15 19:46:45.143264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.054 [2024-05-15 19:46:45.143277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.054 [2024-05-15 19:46:45.143282] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.054 [2024-05-15 19:46:45.143286] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.054 [2024-05-15 19:46:45.143297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.054 qpair failed and we were unable to recover it. 
00:31:19.054 [2024-05-15 19:46:45.153116] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.054 [2024-05-15 19:46:45.153181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.054 [2024-05-15 19:46:45.153194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.054 [2024-05-15 19:46:45.153199] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.054 [2024-05-15 19:46:45.153204] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.054 [2024-05-15 19:46:45.153217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.054 qpair failed and we were unable to recover it. 00:31:19.054 [2024-05-15 19:46:45.163132] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.054 [2024-05-15 19:46:45.163190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.054 [2024-05-15 19:46:45.163203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.054 [2024-05-15 19:46:45.163208] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.054 [2024-05-15 19:46:45.163213] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.054 [2024-05-15 19:46:45.163223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.054 qpair failed and we were unable to recover it. 00:31:19.054 [2024-05-15 19:46:45.173074] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.054 [2024-05-15 19:46:45.173131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.054 [2024-05-15 19:46:45.173144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.054 [2024-05-15 19:46:45.173150] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.054 [2024-05-15 19:46:45.173154] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.054 [2024-05-15 19:46:45.173165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.054 qpair failed and we were unable to recover it. 
00:31:19.054 [2024-05-15 19:46:45.183219] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.054 [2024-05-15 19:46:45.183293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.054 [2024-05-15 19:46:45.183305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.054 [2024-05-15 19:46:45.183311] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.054 [2024-05-15 19:46:45.183320] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.054 [2024-05-15 19:46:45.183331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.054 qpair failed and we were unable to recover it. 00:31:19.054 [2024-05-15 19:46:45.193215] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.054 [2024-05-15 19:46:45.193304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.054 [2024-05-15 19:46:45.193321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.054 [2024-05-15 19:46:45.193326] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.054 [2024-05-15 19:46:45.193331] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.054 [2024-05-15 19:46:45.193342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.054 qpair failed and we were unable to recover it. 00:31:19.054 [2024-05-15 19:46:45.203248] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.054 [2024-05-15 19:46:45.203310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.054 [2024-05-15 19:46:45.203326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.054 [2024-05-15 19:46:45.203332] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.054 [2024-05-15 19:46:45.203336] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.054 [2024-05-15 19:46:45.203347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.054 qpair failed and we were unable to recover it. 
00:31:19.054 [2024-05-15 19:46:45.213197] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.054 [2024-05-15 19:46:45.213257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.054 [2024-05-15 19:46:45.213272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.054 [2024-05-15 19:46:45.213277] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.055 [2024-05-15 19:46:45.213282] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.055 [2024-05-15 19:46:45.213293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.055 qpair failed and we were unable to recover it. 00:31:19.055 [2024-05-15 19:46:45.223328] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.055 [2024-05-15 19:46:45.223398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.055 [2024-05-15 19:46:45.223411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.055 [2024-05-15 19:46:45.223416] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.055 [2024-05-15 19:46:45.223421] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.055 [2024-05-15 19:46:45.223431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.055 qpair failed and we were unable to recover it. 00:31:19.055 [2024-05-15 19:46:45.233321] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.055 [2024-05-15 19:46:45.233387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.055 [2024-05-15 19:46:45.233399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.055 [2024-05-15 19:46:45.233404] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.055 [2024-05-15 19:46:45.233409] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.055 [2024-05-15 19:46:45.233419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.055 qpair failed and we were unable to recover it. 
00:31:19.317 [2024-05-15 19:46:45.243410] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.317 [2024-05-15 19:46:45.243470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.317 [2024-05-15 19:46:45.243483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.317 [2024-05-15 19:46:45.243488] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.317 [2024-05-15 19:46:45.243493] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.317 [2024-05-15 19:46:45.243504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.317 qpair failed and we were unable to recover it. 00:31:19.317 [2024-05-15 19:46:45.253378] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.317 [2024-05-15 19:46:45.253470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.317 [2024-05-15 19:46:45.253482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.317 [2024-05-15 19:46:45.253487] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.317 [2024-05-15 19:46:45.253491] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.317 [2024-05-15 19:46:45.253506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.317 qpair failed and we were unable to recover it. 00:31:19.317 [2024-05-15 19:46:45.263406] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.317 [2024-05-15 19:46:45.263468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.317 [2024-05-15 19:46:45.263482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.317 [2024-05-15 19:46:45.263488] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.317 [2024-05-15 19:46:45.263495] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.317 [2024-05-15 19:46:45.263506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.317 qpair failed and we were unable to recover it. 
00:31:19.317 [2024-05-15 19:46:45.273457] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.317 [2024-05-15 19:46:45.273518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.317 [2024-05-15 19:46:45.273531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.317 [2024-05-15 19:46:45.273537] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.317 [2024-05-15 19:46:45.273541] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.317 [2024-05-15 19:46:45.273552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.317 qpair failed and we were unable to recover it. 00:31:19.317 [2024-05-15 19:46:45.283457] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.317 [2024-05-15 19:46:45.283542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.317 [2024-05-15 19:46:45.283555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.317 [2024-05-15 19:46:45.283560] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.317 [2024-05-15 19:46:45.283565] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.317 [2024-05-15 19:46:45.283576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.317 qpair failed and we were unable to recover it. 00:31:19.317 [2024-05-15 19:46:45.293368] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.317 [2024-05-15 19:46:45.293433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.317 [2024-05-15 19:46:45.293445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.317 [2024-05-15 19:46:45.293451] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.317 [2024-05-15 19:46:45.293455] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.317 [2024-05-15 19:46:45.293466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.317 qpair failed and we were unable to recover it. 
00:31:19.317 [2024-05-15 19:46:45.303555] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.317 [2024-05-15 19:46:45.303619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.317 [2024-05-15 19:46:45.303634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.317 [2024-05-15 19:46:45.303640] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.317 [2024-05-15 19:46:45.303644] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.317 [2024-05-15 19:46:45.303655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.317 qpair failed and we were unable to recover it. 00:31:19.317 [2024-05-15 19:46:45.313423] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.318 [2024-05-15 19:46:45.313490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.318 [2024-05-15 19:46:45.313502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.318 [2024-05-15 19:46:45.313507] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.318 [2024-05-15 19:46:45.313512] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.318 [2024-05-15 19:46:45.313523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.318 qpair failed and we were unable to recover it. 00:31:19.318 [2024-05-15 19:46:45.323577] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.318 [2024-05-15 19:46:45.323638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.318 [2024-05-15 19:46:45.323651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.318 [2024-05-15 19:46:45.323657] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.318 [2024-05-15 19:46:45.323661] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.318 [2024-05-15 19:46:45.323672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.318 qpair failed and we were unable to recover it. 
00:31:19.318 [2024-05-15 19:46:45.333615] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.318 [2024-05-15 19:46:45.333713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.318 [2024-05-15 19:46:45.333726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.318 [2024-05-15 19:46:45.333731] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.318 [2024-05-15 19:46:45.333736] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.318 [2024-05-15 19:46:45.333747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.318 qpair failed and we were unable to recover it. 00:31:19.318 [2024-05-15 19:46:45.343647] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.318 [2024-05-15 19:46:45.343707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.318 [2024-05-15 19:46:45.343719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.318 [2024-05-15 19:46:45.343725] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.318 [2024-05-15 19:46:45.343733] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.318 [2024-05-15 19:46:45.343743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.318 qpair failed and we were unable to recover it. 00:31:19.318 [2024-05-15 19:46:45.353663] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.318 [2024-05-15 19:46:45.353740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.318 [2024-05-15 19:46:45.353752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.318 [2024-05-15 19:46:45.353757] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.318 [2024-05-15 19:46:45.353761] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.318 [2024-05-15 19:46:45.353772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.318 qpair failed and we were unable to recover it. 
00:31:19.318 [2024-05-15 19:46:45.363674] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.318 [2024-05-15 19:46:45.363788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.318 [2024-05-15 19:46:45.363801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.318 [2024-05-15 19:46:45.363806] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.318 [2024-05-15 19:46:45.363811] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.318 [2024-05-15 19:46:45.363822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.318 qpair failed and we were unable to recover it. 00:31:19.318 [2024-05-15 19:46:45.373721] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.318 [2024-05-15 19:46:45.373780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.318 [2024-05-15 19:46:45.373793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.318 [2024-05-15 19:46:45.373798] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.318 [2024-05-15 19:46:45.373802] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.318 [2024-05-15 19:46:45.373813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.318 qpair failed and we were unable to recover it. 00:31:19.318 [2024-05-15 19:46:45.383760] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.318 [2024-05-15 19:46:45.383834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.318 [2024-05-15 19:46:45.383847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.318 [2024-05-15 19:46:45.383853] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.318 [2024-05-15 19:46:45.383858] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.318 [2024-05-15 19:46:45.383869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.318 qpair failed and we were unable to recover it. 
00:31:19.318 [2024-05-15 19:46:45.393758] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.318 [2024-05-15 19:46:45.393822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.318 [2024-05-15 19:46:45.393835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.318 [2024-05-15 19:46:45.393840] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.318 [2024-05-15 19:46:45.393844] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.318 [2024-05-15 19:46:45.393855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.318 qpair failed and we were unable to recover it. 00:31:19.318 [2024-05-15 19:46:45.403785] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.318 [2024-05-15 19:46:45.403862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.318 [2024-05-15 19:46:45.403874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.318 [2024-05-15 19:46:45.403880] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.318 [2024-05-15 19:46:45.403884] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.318 [2024-05-15 19:46:45.403895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.318 qpair failed and we were unable to recover it. 00:31:19.318 [2024-05-15 19:46:45.413869] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.318 [2024-05-15 19:46:45.413927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.318 [2024-05-15 19:46:45.413939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.318 [2024-05-15 19:46:45.413944] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.318 [2024-05-15 19:46:45.413949] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.318 [2024-05-15 19:46:45.413960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.318 qpair failed and we were unable to recover it. 
00:31:19.318 [2024-05-15 19:46:45.423848] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.318 [2024-05-15 19:46:45.423928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.318 [2024-05-15 19:46:45.423947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.318 [2024-05-15 19:46:45.423953] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.318 [2024-05-15 19:46:45.423958] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.318 [2024-05-15 19:46:45.423971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.318 qpair failed and we were unable to recover it. 00:31:19.318 [2024-05-15 19:46:45.433857] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.318 [2024-05-15 19:46:45.433929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.318 [2024-05-15 19:46:45.433947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.318 [2024-05-15 19:46:45.433954] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.318 [2024-05-15 19:46:45.433962] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.318 [2024-05-15 19:46:45.433977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.318 qpair failed and we were unable to recover it. 00:31:19.318 [2024-05-15 19:46:45.443815] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.318 [2024-05-15 19:46:45.443911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.318 [2024-05-15 19:46:45.443925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.318 [2024-05-15 19:46:45.443930] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.318 [2024-05-15 19:46:45.443935] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.318 [2024-05-15 19:46:45.443946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.318 qpair failed and we were unable to recover it. 
00:31:19.319 [2024-05-15 19:46:45.453925] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.319 [2024-05-15 19:46:45.453986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.319 [2024-05-15 19:46:45.453998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.319 [2024-05-15 19:46:45.454004] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.319 [2024-05-15 19:46:45.454008] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.319 [2024-05-15 19:46:45.454019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.319 qpair failed and we were unable to recover it. 00:31:19.319 [2024-05-15 19:46:45.463924] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.319 [2024-05-15 19:46:45.463990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.319 [2024-05-15 19:46:45.464009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.319 [2024-05-15 19:46:45.464016] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.319 [2024-05-15 19:46:45.464020] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.319 [2024-05-15 19:46:45.464034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.319 qpair failed and we were unable to recover it. 00:31:19.319 [2024-05-15 19:46:45.473885] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.319 [2024-05-15 19:46:45.473981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.319 [2024-05-15 19:46:45.473997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.319 [2024-05-15 19:46:45.474002] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.319 [2024-05-15 19:46:45.474007] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.319 [2024-05-15 19:46:45.474020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.319 qpair failed and we were unable to recover it. 
00:31:19.319 [2024-05-15 19:46:45.484035] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.319 [2024-05-15 19:46:45.484098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.319 [2024-05-15 19:46:45.484111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.319 [2024-05-15 19:46:45.484117] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.319 [2024-05-15 19:46:45.484121] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.319 [2024-05-15 19:46:45.484133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.319 qpair failed and we were unable to recover it. 00:31:19.319 [2024-05-15 19:46:45.494034] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.319 [2024-05-15 19:46:45.494092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.319 [2024-05-15 19:46:45.494104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.319 [2024-05-15 19:46:45.494110] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.319 [2024-05-15 19:46:45.494114] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.319 [2024-05-15 19:46:45.494125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.319 qpair failed and we were unable to recover it. 00:31:19.581 [2024-05-15 19:46:45.504081] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.581 [2024-05-15 19:46:45.504145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.581 [2024-05-15 19:46:45.504157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.581 [2024-05-15 19:46:45.504163] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.581 [2024-05-15 19:46:45.504167] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.581 [2024-05-15 19:46:45.504178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.581 qpair failed and we were unable to recover it. 
00:31:19.581 [2024-05-15 19:46:45.514113] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.581 [2024-05-15 19:46:45.514176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.581 [2024-05-15 19:46:45.514189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.581 [2024-05-15 19:46:45.514194] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.581 [2024-05-15 19:46:45.514199] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.581 [2024-05-15 19:46:45.514210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.581 qpair failed and we were unable to recover it. 00:31:19.581 [2024-05-15 19:46:45.524130] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.581 [2024-05-15 19:46:45.524187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.581 [2024-05-15 19:46:45.524199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.581 [2024-05-15 19:46:45.524208] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.581 [2024-05-15 19:46:45.524213] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.581 [2024-05-15 19:46:45.524225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.581 qpair failed and we were unable to recover it. 00:31:19.581 [2024-05-15 19:46:45.534168] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.581 [2024-05-15 19:46:45.534228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.581 [2024-05-15 19:46:45.534240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.581 [2024-05-15 19:46:45.534246] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.581 [2024-05-15 19:46:45.534250] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.581 [2024-05-15 19:46:45.534261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.581 qpair failed and we were unable to recover it. 
00:31:19.581 [2024-05-15 19:46:45.544193] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.581 [2024-05-15 19:46:45.544257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.581 [2024-05-15 19:46:45.544270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.581 [2024-05-15 19:46:45.544276] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.581 [2024-05-15 19:46:45.544280] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.581 [2024-05-15 19:46:45.544291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.581 qpair failed and we were unable to recover it. 00:31:19.581 [2024-05-15 19:46:45.554227] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.581 [2024-05-15 19:46:45.554292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.582 [2024-05-15 19:46:45.554305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.582 [2024-05-15 19:46:45.554310] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.582 [2024-05-15 19:46:45.554319] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.582 [2024-05-15 19:46:45.554330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.582 qpair failed and we were unable to recover it. 00:31:19.582 [2024-05-15 19:46:45.564235] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.582 [2024-05-15 19:46:45.564292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.582 [2024-05-15 19:46:45.564304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.582 [2024-05-15 19:46:45.564310] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.582 [2024-05-15 19:46:45.564319] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.582 [2024-05-15 19:46:45.564331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.582 qpair failed and we were unable to recover it. 
00:31:19.582 [2024-05-15 19:46:45.574277] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.582 [2024-05-15 19:46:45.574341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.582 [2024-05-15 19:46:45.574354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.582 [2024-05-15 19:46:45.574359] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.582 [2024-05-15 19:46:45.574364] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.582 [2024-05-15 19:46:45.574375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.582 qpair failed and we were unable to recover it. 00:31:19.582 [2024-05-15 19:46:45.584327] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.582 [2024-05-15 19:46:45.584399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.582 [2024-05-15 19:46:45.584412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.582 [2024-05-15 19:46:45.584418] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.582 [2024-05-15 19:46:45.584422] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.582 [2024-05-15 19:46:45.584433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.582 qpair failed and we were unable to recover it. 00:31:19.582 [2024-05-15 19:46:45.594293] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.582 [2024-05-15 19:46:45.594363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.582 [2024-05-15 19:46:45.594376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.582 [2024-05-15 19:46:45.594381] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.582 [2024-05-15 19:46:45.594386] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.582 [2024-05-15 19:46:45.594397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.582 qpair failed and we were unable to recover it. 
00:31:19.582 [2024-05-15 19:46:45.604355] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.582 [2024-05-15 19:46:45.604431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.582 [2024-05-15 19:46:45.604444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.582 [2024-05-15 19:46:45.604449] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.582 [2024-05-15 19:46:45.604454] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.582 [2024-05-15 19:46:45.604465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.582 qpair failed and we were unable to recover it. 00:31:19.582 [2024-05-15 19:46:45.614407] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.582 [2024-05-15 19:46:45.614470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.582 [2024-05-15 19:46:45.614485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.582 [2024-05-15 19:46:45.614491] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.582 [2024-05-15 19:46:45.614496] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.582 [2024-05-15 19:46:45.614507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.582 qpair failed and we were unable to recover it. 00:31:19.582 [2024-05-15 19:46:45.624311] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.582 [2024-05-15 19:46:45.624377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.582 [2024-05-15 19:46:45.624390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.582 [2024-05-15 19:46:45.624395] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.582 [2024-05-15 19:46:45.624400] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.582 [2024-05-15 19:46:45.624411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.582 qpair failed and we were unable to recover it. 
00:31:19.582 [2024-05-15 19:46:45.634459] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.582 [2024-05-15 19:46:45.634552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.582 [2024-05-15 19:46:45.634564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.582 [2024-05-15 19:46:45.634569] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.582 [2024-05-15 19:46:45.634574] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.582 [2024-05-15 19:46:45.634585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.582 qpair failed and we were unable to recover it. 00:31:19.582 [2024-05-15 19:46:45.644512] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.582 [2024-05-15 19:46:45.644567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.582 [2024-05-15 19:46:45.644579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.582 [2024-05-15 19:46:45.644585] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.582 [2024-05-15 19:46:45.644589] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.582 [2024-05-15 19:46:45.644599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.582 qpair failed and we were unable to recover it. 00:31:19.582 [2024-05-15 19:46:45.654409] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.582 [2024-05-15 19:46:45.654465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.582 [2024-05-15 19:46:45.654478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.582 [2024-05-15 19:46:45.654483] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.582 [2024-05-15 19:46:45.654488] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.582 [2024-05-15 19:46:45.654501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.582 qpair failed and we were unable to recover it. 
00:31:19.582 [2024-05-15 19:46:45.664551] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.582 [2024-05-15 19:46:45.664637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.582 [2024-05-15 19:46:45.664650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.582 [2024-05-15 19:46:45.664655] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.582 [2024-05-15 19:46:45.664660] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.582 [2024-05-15 19:46:45.664670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.582 qpair failed and we were unable to recover it. 00:31:19.582 [2024-05-15 19:46:45.674611] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.582 [2024-05-15 19:46:45.674699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.582 [2024-05-15 19:46:45.674711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.582 [2024-05-15 19:46:45.674717] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.582 [2024-05-15 19:46:45.674721] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.582 [2024-05-15 19:46:45.674732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.582 qpair failed and we were unable to recover it. 00:31:19.582 [2024-05-15 19:46:45.684610] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.582 [2024-05-15 19:46:45.684665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.582 [2024-05-15 19:46:45.684677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.582 [2024-05-15 19:46:45.684683] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.582 [2024-05-15 19:46:45.684687] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.582 [2024-05-15 19:46:45.684697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.582 qpair failed and we were unable to recover it. 
00:31:19.582 [2024-05-15 19:46:45.694630] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.583 [2024-05-15 19:46:45.694689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.583 [2024-05-15 19:46:45.694702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.583 [2024-05-15 19:46:45.694707] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.583 [2024-05-15 19:46:45.694712] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.583 [2024-05-15 19:46:45.694722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.583 qpair failed and we were unable to recover it. 00:31:19.583 [2024-05-15 19:46:45.704644] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.583 [2024-05-15 19:46:45.704706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.583 [2024-05-15 19:46:45.704723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.583 [2024-05-15 19:46:45.704728] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.583 [2024-05-15 19:46:45.704733] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.583 [2024-05-15 19:46:45.704743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.583 qpair failed and we were unable to recover it. 00:31:19.583 [2024-05-15 19:46:45.714713] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.583 [2024-05-15 19:46:45.714794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.583 [2024-05-15 19:46:45.714806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.583 [2024-05-15 19:46:45.714812] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.583 [2024-05-15 19:46:45.714817] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.583 [2024-05-15 19:46:45.714827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.583 qpair failed and we were unable to recover it. 
00:31:19.583 [2024-05-15 19:46:45.724699] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.583 [2024-05-15 19:46:45.724786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.583 [2024-05-15 19:46:45.724799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.583 [2024-05-15 19:46:45.724804] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.583 [2024-05-15 19:46:45.724808] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.583 [2024-05-15 19:46:45.724819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.583 qpair failed and we were unable to recover it. 00:31:19.583 [2024-05-15 19:46:45.734746] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.583 [2024-05-15 19:46:45.734808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.583 [2024-05-15 19:46:45.734821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.583 [2024-05-15 19:46:45.734827] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.583 [2024-05-15 19:46:45.734832] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.583 [2024-05-15 19:46:45.734842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.583 qpair failed and we were unable to recover it. 00:31:19.583 [2024-05-15 19:46:45.744766] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.583 [2024-05-15 19:46:45.744827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.583 [2024-05-15 19:46:45.744840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.583 [2024-05-15 19:46:45.744845] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.583 [2024-05-15 19:46:45.744852] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.583 [2024-05-15 19:46:45.744863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.583 qpair failed and we were unable to recover it. 
00:31:19.583 [2024-05-15 19:46:45.754903] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.583 [2024-05-15 19:46:45.754963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.583 [2024-05-15 19:46:45.754975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.583 [2024-05-15 19:46:45.754980] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.583 [2024-05-15 19:46:45.754985] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.583 [2024-05-15 19:46:45.754996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.583 qpair failed and we were unable to recover it. 00:31:19.845 [2024-05-15 19:46:45.764800] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.845 [2024-05-15 19:46:45.764908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.845 [2024-05-15 19:46:45.764922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.845 [2024-05-15 19:46:45.764927] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.845 [2024-05-15 19:46:45.764932] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.845 [2024-05-15 19:46:45.764943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.845 qpair failed and we were unable to recover it. 00:31:19.845 [2024-05-15 19:46:45.774850] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.845 [2024-05-15 19:46:45.774918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.845 [2024-05-15 19:46:45.774936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.845 [2024-05-15 19:46:45.774943] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.845 [2024-05-15 19:46:45.774948] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.845 [2024-05-15 19:46:45.774961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.845 qpair failed and we were unable to recover it. 
00:31:19.845 [2024-05-15 19:46:45.784823] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.845 [2024-05-15 19:46:45.784891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.845 [2024-05-15 19:46:45.784910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.845 [2024-05-15 19:46:45.784917] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.845 [2024-05-15 19:46:45.784921] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.845 [2024-05-15 19:46:45.784935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.845 qpair failed and we were unable to recover it. 00:31:19.845 [2024-05-15 19:46:45.794932] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.845 [2024-05-15 19:46:45.795005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.845 [2024-05-15 19:46:45.795019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.845 [2024-05-15 19:46:45.795025] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.845 [2024-05-15 19:46:45.795029] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.845 [2024-05-15 19:46:45.795041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.845 qpair failed and we were unable to recover it. 00:31:19.845 [2024-05-15 19:46:45.804828] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.845 [2024-05-15 19:46:45.804927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.845 [2024-05-15 19:46:45.804940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.845 [2024-05-15 19:46:45.804946] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.845 [2024-05-15 19:46:45.804950] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.845 [2024-05-15 19:46:45.804961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.845 qpair failed and we were unable to recover it. 
00:31:19.845 [2024-05-15 19:46:45.814951] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.845 [2024-05-15 19:46:45.815013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.845 [2024-05-15 19:46:45.815032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.845 [2024-05-15 19:46:45.815038] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.845 [2024-05-15 19:46:45.815043] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.845 [2024-05-15 19:46:45.815057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.845 qpair failed and we were unable to recover it. 00:31:19.845 [2024-05-15 19:46:45.824976] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.845 [2024-05-15 19:46:45.825043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.845 [2024-05-15 19:46:45.825062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.845 [2024-05-15 19:46:45.825068] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.845 [2024-05-15 19:46:45.825073] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.845 [2024-05-15 19:46:45.825087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.845 qpair failed and we were unable to recover it. 00:31:19.845 [2024-05-15 19:46:45.835020] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.846 [2024-05-15 19:46:45.835113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.846 [2024-05-15 19:46:45.835132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.846 [2024-05-15 19:46:45.835139] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.846 [2024-05-15 19:46:45.835148] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.846 [2024-05-15 19:46:45.835162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.846 qpair failed and we were unable to recover it. 
00:31:19.846 [2024-05-15 19:46:45.844981] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.846 [2024-05-15 19:46:45.845078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.846 [2024-05-15 19:46:45.845097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.846 [2024-05-15 19:46:45.845103] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.846 [2024-05-15 19:46:45.845108] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.846 [2024-05-15 19:46:45.845122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.846 qpair failed and we were unable to recover it. 00:31:19.846 [2024-05-15 19:46:45.855054] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.846 [2024-05-15 19:46:45.855119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.846 [2024-05-15 19:46:45.855138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.846 [2024-05-15 19:46:45.855144] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.846 [2024-05-15 19:46:45.855149] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.846 [2024-05-15 19:46:45.855162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.846 qpair failed and we were unable to recover it. 00:31:19.846 [2024-05-15 19:46:45.865090] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.846 [2024-05-15 19:46:45.865153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.846 [2024-05-15 19:46:45.865167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.846 [2024-05-15 19:46:45.865173] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.846 [2024-05-15 19:46:45.865177] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.846 [2024-05-15 19:46:45.865188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.846 qpair failed and we were unable to recover it. 
00:31:19.846 [2024-05-15 19:46:45.875108] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.846 [2024-05-15 19:46:45.875172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.846 [2024-05-15 19:46:45.875185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.846 [2024-05-15 19:46:45.875190] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.846 [2024-05-15 19:46:45.875195] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.846 [2024-05-15 19:46:45.875205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.846 qpair failed and we were unable to recover it. 00:31:19.846 [2024-05-15 19:46:45.885132] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.846 [2024-05-15 19:46:45.885214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.846 [2024-05-15 19:46:45.885227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.846 [2024-05-15 19:46:45.885232] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.846 [2024-05-15 19:46:45.885237] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.846 [2024-05-15 19:46:45.885248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.846 qpair failed and we were unable to recover it. 00:31:19.846 [2024-05-15 19:46:45.895173] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.846 [2024-05-15 19:46:45.895230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.846 [2024-05-15 19:46:45.895242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.846 [2024-05-15 19:46:45.895247] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.846 [2024-05-15 19:46:45.895252] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.846 [2024-05-15 19:46:45.895263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.846 qpair failed and we were unable to recover it. 
00:31:19.846 [2024-05-15 19:46:45.905119] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.846 [2024-05-15 19:46:45.905212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.846 [2024-05-15 19:46:45.905224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.846 [2024-05-15 19:46:45.905229] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.846 [2024-05-15 19:46:45.905235] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.846 [2024-05-15 19:46:45.905246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.846 qpair failed and we were unable to recover it. 00:31:19.846 [2024-05-15 19:46:45.915108] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.846 [2024-05-15 19:46:45.915173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.846 [2024-05-15 19:46:45.915186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.846 [2024-05-15 19:46:45.915191] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.846 [2024-05-15 19:46:45.915195] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.846 [2024-05-15 19:46:45.915206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.846 qpair failed and we were unable to recover it. 00:31:19.846 [2024-05-15 19:46:45.925233] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.846 [2024-05-15 19:46:45.925293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.846 [2024-05-15 19:46:45.925306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.846 [2024-05-15 19:46:45.925317] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.846 [2024-05-15 19:46:45.925322] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.846 [2024-05-15 19:46:45.925333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.846 qpair failed and we were unable to recover it. 
00:31:19.846 [2024-05-15 19:46:45.935261] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.846 [2024-05-15 19:46:45.935323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.846 [2024-05-15 19:46:45.935336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.846 [2024-05-15 19:46:45.935341] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.846 [2024-05-15 19:46:45.935345] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.846 [2024-05-15 19:46:45.935356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.846 qpair failed and we were unable to recover it. 00:31:19.846 [2024-05-15 19:46:45.945291] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.846 [2024-05-15 19:46:45.945357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.846 [2024-05-15 19:46:45.945369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.846 [2024-05-15 19:46:45.945375] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.846 [2024-05-15 19:46:45.945379] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.846 [2024-05-15 19:46:45.945390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.846 qpair failed and we were unable to recover it. 00:31:19.846 [2024-05-15 19:46:45.955317] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.846 [2024-05-15 19:46:45.955398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.846 [2024-05-15 19:46:45.955411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.846 [2024-05-15 19:46:45.955416] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.846 [2024-05-15 19:46:45.955420] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.846 [2024-05-15 19:46:45.955431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.846 qpair failed and we were unable to recover it. 
00:31:19.846 [2024-05-15 19:46:45.965364] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.846 [2024-05-15 19:46:45.965421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.846 [2024-05-15 19:46:45.965434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.846 [2024-05-15 19:46:45.965439] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.846 [2024-05-15 19:46:45.965444] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.846 [2024-05-15 19:46:45.965454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.846 qpair failed and we were unable to recover it. 00:31:19.847 [2024-05-15 19:46:45.975380] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.847 [2024-05-15 19:46:45.975440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.847 [2024-05-15 19:46:45.975452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.847 [2024-05-15 19:46:45.975458] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.847 [2024-05-15 19:46:45.975462] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.847 [2024-05-15 19:46:45.975473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.847 qpair failed and we were unable to recover it. 00:31:19.847 [2024-05-15 19:46:45.985401] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.847 [2024-05-15 19:46:45.985494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.847 [2024-05-15 19:46:45.985506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.847 [2024-05-15 19:46:45.985511] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.847 [2024-05-15 19:46:45.985516] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.847 [2024-05-15 19:46:45.985527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.847 qpair failed and we were unable to recover it. 
00:31:19.847 [2024-05-15 19:46:45.995420] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.847 [2024-05-15 19:46:45.995486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.847 [2024-05-15 19:46:45.995498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.847 [2024-05-15 19:46:45.995503] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.847 [2024-05-15 19:46:45.995508] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.847 [2024-05-15 19:46:45.995518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.847 qpair failed and we were unable to recover it. 00:31:19.847 [2024-05-15 19:46:46.005464] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.847 [2024-05-15 19:46:46.005524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.847 [2024-05-15 19:46:46.005537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.847 [2024-05-15 19:46:46.005542] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.847 [2024-05-15 19:46:46.005547] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.847 [2024-05-15 19:46:46.005559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.847 qpair failed and we were unable to recover it. 00:31:19.847 [2024-05-15 19:46:46.015385] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.847 [2024-05-15 19:46:46.015446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.847 [2024-05-15 19:46:46.015461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.847 [2024-05-15 19:46:46.015467] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.847 [2024-05-15 19:46:46.015471] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.847 [2024-05-15 19:46:46.015482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.847 qpair failed and we were unable to recover it. 
00:31:19.847 [2024-05-15 19:46:46.025540] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.847 [2024-05-15 19:46:46.025603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.847 [2024-05-15 19:46:46.025615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.847 [2024-05-15 19:46:46.025620] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.847 [2024-05-15 19:46:46.025624] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:19.847 [2024-05-15 19:46:46.025635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.847 qpair failed and we were unable to recover it. 00:31:20.183 [2024-05-15 19:46:46.035437] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.183 [2024-05-15 19:46:46.035501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.183 [2024-05-15 19:46:46.035514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.183 [2024-05-15 19:46:46.035519] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.183 [2024-05-15 19:46:46.035523] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.183 [2024-05-15 19:46:46.035534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.183 qpair failed and we were unable to recover it. 00:31:20.183 [2024-05-15 19:46:46.045639] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.183 [2024-05-15 19:46:46.045740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.183 [2024-05-15 19:46:46.045753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.183 [2024-05-15 19:46:46.045759] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.183 [2024-05-15 19:46:46.045764] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.183 [2024-05-15 19:46:46.045774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.183 qpair failed and we were unable to recover it. 
00:31:20.183 [2024-05-15 19:46:46.055588] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.183 [2024-05-15 19:46:46.055686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.183 [2024-05-15 19:46:46.055699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.183 [2024-05-15 19:46:46.055704] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.183 [2024-05-15 19:46:46.055709] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.183 [2024-05-15 19:46:46.055722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.183 qpair failed and we were unable to recover it. 00:31:20.183 [2024-05-15 19:46:46.065642] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.183 [2024-05-15 19:46:46.065729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.183 [2024-05-15 19:46:46.065741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.183 [2024-05-15 19:46:46.065746] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.184 [2024-05-15 19:46:46.065751] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.184 [2024-05-15 19:46:46.065762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.184 qpair failed and we were unable to recover it. 00:31:20.184 [2024-05-15 19:46:46.075657] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.184 [2024-05-15 19:46:46.075724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.184 [2024-05-15 19:46:46.075737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.184 [2024-05-15 19:46:46.075742] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.184 [2024-05-15 19:46:46.075746] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.184 [2024-05-15 19:46:46.075757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.184 qpair failed and we were unable to recover it. 
00:31:20.184 [2024-05-15 19:46:46.085679] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.184 [2024-05-15 19:46:46.085740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.184 [2024-05-15 19:46:46.085753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.184 [2024-05-15 19:46:46.085758] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.184 [2024-05-15 19:46:46.085763] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.184 [2024-05-15 19:46:46.085774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.184 qpair failed and we were unable to recover it. 00:31:20.184 [2024-05-15 19:46:46.095708] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.184 [2024-05-15 19:46:46.095769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.184 [2024-05-15 19:46:46.095781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.184 [2024-05-15 19:46:46.095787] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.184 [2024-05-15 19:46:46.095792] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.184 [2024-05-15 19:46:46.095802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.184 qpair failed and we were unable to recover it. 00:31:20.184 [2024-05-15 19:46:46.105743] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.184 [2024-05-15 19:46:46.105807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.184 [2024-05-15 19:46:46.105823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.184 [2024-05-15 19:46:46.105828] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.184 [2024-05-15 19:46:46.105833] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.184 [2024-05-15 19:46:46.105844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.184 qpair failed and we were unable to recover it. 
00:31:20.184 [2024-05-15 19:46:46.115771] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.184 [2024-05-15 19:46:46.115834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.184 [2024-05-15 19:46:46.115847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.184 [2024-05-15 19:46:46.115853] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.184 [2024-05-15 19:46:46.115857] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.184 [2024-05-15 19:46:46.115868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.184 qpair failed and we were unable to recover it. 00:31:20.184 [2024-05-15 19:46:46.125793] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.184 [2024-05-15 19:46:46.125851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.184 [2024-05-15 19:46:46.125864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.184 [2024-05-15 19:46:46.125869] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.184 [2024-05-15 19:46:46.125874] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.184 [2024-05-15 19:46:46.125884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.184 qpair failed and we were unable to recover it. 00:31:20.184 [2024-05-15 19:46:46.135807] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.184 [2024-05-15 19:46:46.135865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.184 [2024-05-15 19:46:46.135878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.184 [2024-05-15 19:46:46.135883] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.184 [2024-05-15 19:46:46.135888] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.184 [2024-05-15 19:46:46.135898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.184 qpair failed and we were unable to recover it. 
00:31:20.184 [2024-05-15 19:46:46.145925] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.184 [2024-05-15 19:46:46.145983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.184 [2024-05-15 19:46:46.145995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.184 [2024-05-15 19:46:46.146001] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.184 [2024-05-15 19:46:46.146005] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.184 [2024-05-15 19:46:46.146019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.184 qpair failed and we were unable to recover it. 00:31:20.184 [2024-05-15 19:46:46.155817] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.184 [2024-05-15 19:46:46.155885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.184 [2024-05-15 19:46:46.155898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.184 [2024-05-15 19:46:46.155903] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.184 [2024-05-15 19:46:46.155908] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.184 [2024-05-15 19:46:46.155919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.184 qpair failed and we were unable to recover it. 00:31:20.184 [2024-05-15 19:46:46.165950] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.184 [2024-05-15 19:46:46.166027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.184 [2024-05-15 19:46:46.166040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.184 [2024-05-15 19:46:46.166045] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.184 [2024-05-15 19:46:46.166050] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.184 [2024-05-15 19:46:46.166061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.184 qpair failed and we were unable to recover it. 
00:31:20.185 [2024-05-15 19:46:46.175908] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.185 [2024-05-15 19:46:46.175979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.185 [2024-05-15 19:46:46.175992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.185 [2024-05-15 19:46:46.175997] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.185 [2024-05-15 19:46:46.176001] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.185 [2024-05-15 19:46:46.176012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.185 qpair failed and we were unable to recover it. 00:31:20.185 [2024-05-15 19:46:46.185971] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.185 [2024-05-15 19:46:46.186065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.185 [2024-05-15 19:46:46.186080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.185 [2024-05-15 19:46:46.186085] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.185 [2024-05-15 19:46:46.186089] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.185 [2024-05-15 19:46:46.186101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.185 qpair failed and we were unable to recover it. 00:31:20.185 [2024-05-15 19:46:46.195995] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.185 [2024-05-15 19:46:46.196067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.185 [2024-05-15 19:46:46.196086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.185 [2024-05-15 19:46:46.196093] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.185 [2024-05-15 19:46:46.196098] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.185 [2024-05-15 19:46:46.196111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.185 qpair failed and we were unable to recover it. 
00:31:20.185 [2024-05-15 19:46:46.206022] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.185 [2024-05-15 19:46:46.206087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.185 [2024-05-15 19:46:46.206106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.185 [2024-05-15 19:46:46.206112] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.185 [2024-05-15 19:46:46.206117] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.185 [2024-05-15 19:46:46.206131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.185 qpair failed and we were unable to recover it. 00:31:20.185 [2024-05-15 19:46:46.215934] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.185 [2024-05-15 19:46:46.216008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.185 [2024-05-15 19:46:46.216022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.185 [2024-05-15 19:46:46.216027] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.185 [2024-05-15 19:46:46.216032] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.185 [2024-05-15 19:46:46.216044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.185 qpair failed and we were unable to recover it. 00:31:20.185 [2024-05-15 19:46:46.226110] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.185 [2024-05-15 19:46:46.226187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.185 [2024-05-15 19:46:46.226200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.185 [2024-05-15 19:46:46.226205] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.185 [2024-05-15 19:46:46.226210] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.185 [2024-05-15 19:46:46.226221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.185 qpair failed and we were unable to recover it. 
00:31:20.185 [2024-05-15 19:46:46.236116] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.185 [2024-05-15 19:46:46.236181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.185 [2024-05-15 19:46:46.236194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.185 [2024-05-15 19:46:46.236199] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.185 [2024-05-15 19:46:46.236207] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.185 [2024-05-15 19:46:46.236219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.185 qpair failed and we were unable to recover it. 00:31:20.185 [2024-05-15 19:46:46.246097] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.185 [2024-05-15 19:46:46.246183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.185 [2024-05-15 19:46:46.246196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.185 [2024-05-15 19:46:46.246202] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.185 [2024-05-15 19:46:46.246206] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.185 [2024-05-15 19:46:46.246217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.185 qpair failed and we were unable to recover it. 00:31:20.185 [2024-05-15 19:46:46.256146] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.185 [2024-05-15 19:46:46.256288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.185 [2024-05-15 19:46:46.256301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.185 [2024-05-15 19:46:46.256306] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.185 [2024-05-15 19:46:46.256311] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.185 [2024-05-15 19:46:46.256334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.185 qpair failed and we were unable to recover it. 
00:31:20.185 [2024-05-15 19:46:46.266196] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.185 [2024-05-15 19:46:46.266254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.185 [2024-05-15 19:46:46.266267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.185 [2024-05-15 19:46:46.266272] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.185 [2024-05-15 19:46:46.266276] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.185 [2024-05-15 19:46:46.266287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.185 qpair failed and we were unable to recover it. 00:31:20.185 [2024-05-15 19:46:46.276229] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.185 [2024-05-15 19:46:46.276290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.185 [2024-05-15 19:46:46.276302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.185 [2024-05-15 19:46:46.276308] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.186 [2024-05-15 19:46:46.276315] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.186 [2024-05-15 19:46:46.276327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.186 qpair failed and we were unable to recover it. 00:31:20.186 [2024-05-15 19:46:46.286263] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.186 [2024-05-15 19:46:46.286355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.186 [2024-05-15 19:46:46.286368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.186 [2024-05-15 19:46:46.286373] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.186 [2024-05-15 19:46:46.286378] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.186 [2024-05-15 19:46:46.286389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.186 qpair failed and we were unable to recover it. 
00:31:20.186 [2024-05-15 19:46:46.296262] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.186 [2024-05-15 19:46:46.296321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.186 [2024-05-15 19:46:46.296334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.186 [2024-05-15 19:46:46.296340] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.186 [2024-05-15 19:46:46.296344] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.186 [2024-05-15 19:46:46.296355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.186 qpair failed and we were unable to recover it. 00:31:20.186 [2024-05-15 19:46:46.306361] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.186 [2024-05-15 19:46:46.306434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.186 [2024-05-15 19:46:46.306447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.186 [2024-05-15 19:46:46.306452] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.186 [2024-05-15 19:46:46.306456] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.186 [2024-05-15 19:46:46.306467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.186 qpair failed and we were unable to recover it. 00:31:20.186 [2024-05-15 19:46:46.316352] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.186 [2024-05-15 19:46:46.316414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.186 [2024-05-15 19:46:46.316427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.186 [2024-05-15 19:46:46.316432] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.186 [2024-05-15 19:46:46.316436] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.186 [2024-05-15 19:46:46.316447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.186 qpair failed and we were unable to recover it. 
00:31:20.186 [2024-05-15 19:46:46.326407] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.186 [2024-05-15 19:46:46.326466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.186 [2024-05-15 19:46:46.326478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.186 [2024-05-15 19:46:46.326486] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.186 [2024-05-15 19:46:46.326491] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.186 [2024-05-15 19:46:46.326502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.186 qpair failed and we were unable to recover it. 00:31:20.186 [2024-05-15 19:46:46.336400] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.186 [2024-05-15 19:46:46.336461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.186 [2024-05-15 19:46:46.336473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.186 [2024-05-15 19:46:46.336478] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.186 [2024-05-15 19:46:46.336483] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.186 [2024-05-15 19:46:46.336494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.186 qpair failed and we were unable to recover it. 00:31:20.186 [2024-05-15 19:46:46.346420] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.186 [2024-05-15 19:46:46.346479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.186 [2024-05-15 19:46:46.346492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.186 [2024-05-15 19:46:46.346497] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.186 [2024-05-15 19:46:46.346501] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.186 [2024-05-15 19:46:46.346512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.186 qpair failed and we were unable to recover it. 
00:31:20.186 [2024-05-15 19:46:46.356475] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.186 [2024-05-15 19:46:46.356539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.186 [2024-05-15 19:46:46.356552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.186 [2024-05-15 19:46:46.356557] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.186 [2024-05-15 19:46:46.356561] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.186 [2024-05-15 19:46:46.356572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.186 qpair failed and we were unable to recover it. 00:31:20.450 [2024-05-15 19:46:46.366490] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.450 [2024-05-15 19:46:46.366552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.450 [2024-05-15 19:46:46.366565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.450 [2024-05-15 19:46:46.366571] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.450 [2024-05-15 19:46:46.366576] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.450 [2024-05-15 19:46:46.366587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.450 qpair failed and we were unable to recover it. 00:31:20.450 [2024-05-15 19:46:46.376594] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.450 [2024-05-15 19:46:46.376655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.450 [2024-05-15 19:46:46.376667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.450 [2024-05-15 19:46:46.376672] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.450 [2024-05-15 19:46:46.376677] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.450 [2024-05-15 19:46:46.376688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.450 qpair failed and we were unable to recover it. 
00:31:20.450 [2024-05-15 19:46:46.386531] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.450 [2024-05-15 19:46:46.386591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.450 [2024-05-15 19:46:46.386603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.450 [2024-05-15 19:46:46.386609] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.450 [2024-05-15 19:46:46.386613] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.450 [2024-05-15 19:46:46.386624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.450 qpair failed and we were unable to recover it. 00:31:20.450 [2024-05-15 19:46:46.396563] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.450 [2024-05-15 19:46:46.396628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.450 [2024-05-15 19:46:46.396640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.450 [2024-05-15 19:46:46.396646] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.450 [2024-05-15 19:46:46.396650] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.450 [2024-05-15 19:46:46.396661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.450 qpair failed and we were unable to recover it. 00:31:20.450 [2024-05-15 19:46:46.406593] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.450 [2024-05-15 19:46:46.406655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.450 [2024-05-15 19:46:46.406667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.450 [2024-05-15 19:46:46.406672] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.450 [2024-05-15 19:46:46.406677] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.450 [2024-05-15 19:46:46.406687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.450 qpair failed and we were unable to recover it. 
00:31:20.450 [2024-05-15 19:46:46.416501] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.450 [2024-05-15 19:46:46.416562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.450 [2024-05-15 19:46:46.416575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.450 [2024-05-15 19:46:46.416583] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.450 [2024-05-15 19:46:46.416588] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.450 [2024-05-15 19:46:46.416598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.450 qpair failed and we were unable to recover it. 00:31:20.450 [2024-05-15 19:46:46.426546] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.450 [2024-05-15 19:46:46.426608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.450 [2024-05-15 19:46:46.426620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.450 [2024-05-15 19:46:46.426625] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.450 [2024-05-15 19:46:46.426630] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.450 [2024-05-15 19:46:46.426641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.450 qpair failed and we were unable to recover it. 00:31:20.450 [2024-05-15 19:46:46.436661] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.450 [2024-05-15 19:46:46.436724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.450 [2024-05-15 19:46:46.436737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.450 [2024-05-15 19:46:46.436742] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.450 [2024-05-15 19:46:46.436746] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.450 [2024-05-15 19:46:46.436757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.450 qpair failed and we were unable to recover it. 
00:31:20.450 [2024-05-15 19:46:46.446699] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.450 [2024-05-15 19:46:46.446762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.450 [2024-05-15 19:46:46.446775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.450 [2024-05-15 19:46:46.446781] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.450 [2024-05-15 19:46:46.446785] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.450 [2024-05-15 19:46:46.446796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.450 qpair failed and we were unable to recover it. 00:31:20.450 [2024-05-15 19:46:46.456711] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.450 [2024-05-15 19:46:46.456776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.450 [2024-05-15 19:46:46.456789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.450 [2024-05-15 19:46:46.456794] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.450 [2024-05-15 19:46:46.456798] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.450 [2024-05-15 19:46:46.456809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.450 qpair failed and we were unable to recover it. 00:31:20.450 [2024-05-15 19:46:46.466737] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.450 [2024-05-15 19:46:46.466860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.450 [2024-05-15 19:46:46.466873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.450 [2024-05-15 19:46:46.466878] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.450 [2024-05-15 19:46:46.466883] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.450 [2024-05-15 19:46:46.466893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.450 qpair failed and we were unable to recover it. 
00:31:20.450 [2024-05-15 19:46:46.476815] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.450 [2024-05-15 19:46:46.476877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.451 [2024-05-15 19:46:46.476890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.451 [2024-05-15 19:46:46.476895] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.451 [2024-05-15 19:46:46.476900] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.451 [2024-05-15 19:46:46.476910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.451 qpair failed and we were unable to recover it. 00:31:20.451 [2024-05-15 19:46:46.486791] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.451 [2024-05-15 19:46:46.486845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.451 [2024-05-15 19:46:46.486858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.451 [2024-05-15 19:46:46.486864] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.451 [2024-05-15 19:46:46.486868] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.451 [2024-05-15 19:46:46.486879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.451 qpair failed and we were unable to recover it. 00:31:20.451 [2024-05-15 19:46:46.496813] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.451 [2024-05-15 19:46:46.496873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.451 [2024-05-15 19:46:46.496886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.451 [2024-05-15 19:46:46.496891] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.451 [2024-05-15 19:46:46.496895] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.451 [2024-05-15 19:46:46.496906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.451 qpair failed and we were unable to recover it. 
00:31:20.451 [2024-05-15 19:46:46.506846] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.451 [2024-05-15 19:46:46.506906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.451 [2024-05-15 19:46:46.506921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.451 [2024-05-15 19:46:46.506927] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.451 [2024-05-15 19:46:46.506931] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.451 [2024-05-15 19:46:46.506942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.451 qpair failed and we were unable to recover it. 00:31:20.451 [2024-05-15 19:46:46.516877] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.451 [2024-05-15 19:46:46.516940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.451 [2024-05-15 19:46:46.516953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.451 [2024-05-15 19:46:46.516958] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.451 [2024-05-15 19:46:46.516963] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.451 [2024-05-15 19:46:46.516973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.451 qpair failed and we were unable to recover it. 00:31:20.451 [2024-05-15 19:46:46.526902] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.451 [2024-05-15 19:46:46.526959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.451 [2024-05-15 19:46:46.526972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.451 [2024-05-15 19:46:46.526978] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.451 [2024-05-15 19:46:46.526982] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.451 [2024-05-15 19:46:46.526993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.451 qpair failed and we were unable to recover it. 
00:31:20.451 [2024-05-15 19:46:46.536927] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.451 [2024-05-15 19:46:46.536996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.451 [2024-05-15 19:46:46.537015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.451 [2024-05-15 19:46:46.537021] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.451 [2024-05-15 19:46:46.537026] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.451 [2024-05-15 19:46:46.537041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.451 qpair failed and we were unable to recover it. 00:31:20.451 [2024-05-15 19:46:46.546999] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.451 [2024-05-15 19:46:46.547077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.451 [2024-05-15 19:46:46.547096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.451 [2024-05-15 19:46:46.547102] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.451 [2024-05-15 19:46:46.547108] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.451 [2024-05-15 19:46:46.547125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.451 qpair failed and we were unable to recover it. 00:31:20.451 [2024-05-15 19:46:46.557044] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.451 [2024-05-15 19:46:46.557107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.451 [2024-05-15 19:46:46.557121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.451 [2024-05-15 19:46:46.557127] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.451 [2024-05-15 19:46:46.557131] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.451 [2024-05-15 19:46:46.557143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.451 qpair failed and we were unable to recover it. 
00:31:20.451 [2024-05-15 19:46:46.567004] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.451 [2024-05-15 19:46:46.567062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.451 [2024-05-15 19:46:46.567075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.451 [2024-05-15 19:46:46.567081] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.451 [2024-05-15 19:46:46.567085] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.451 [2024-05-15 19:46:46.567096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.451 qpair failed and we were unable to recover it. 00:31:20.451 [2024-05-15 19:46:46.577060] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.451 [2024-05-15 19:46:46.577116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.451 [2024-05-15 19:46:46.577129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.451 [2024-05-15 19:46:46.577135] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.451 [2024-05-15 19:46:46.577139] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.451 [2024-05-15 19:46:46.577150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.451 qpair failed and we were unable to recover it. 00:31:20.451 [2024-05-15 19:46:46.587061] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.451 [2024-05-15 19:46:46.587123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.451 [2024-05-15 19:46:46.587136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.451 [2024-05-15 19:46:46.587141] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.451 [2024-05-15 19:46:46.587145] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.451 [2024-05-15 19:46:46.587156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.451 qpair failed and we were unable to recover it. 
00:31:20.451 [2024-05-15 19:46:46.597122] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.451 [2024-05-15 19:46:46.597221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.451 [2024-05-15 19:46:46.597237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.451 [2024-05-15 19:46:46.597243] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.451 [2024-05-15 19:46:46.597247] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.451 [2024-05-15 19:46:46.597259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.451 qpair failed and we were unable to recover it. 00:31:20.451 [2024-05-15 19:46:46.607115] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.451 [2024-05-15 19:46:46.607209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.451 [2024-05-15 19:46:46.607222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.451 [2024-05-15 19:46:46.607228] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.451 [2024-05-15 19:46:46.607232] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.451 [2024-05-15 19:46:46.607244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.451 qpair failed and we were unable to recover it. 00:31:20.451 [2024-05-15 19:46:46.617163] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.451 [2024-05-15 19:46:46.617257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.452 [2024-05-15 19:46:46.617271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.452 [2024-05-15 19:46:46.617276] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.452 [2024-05-15 19:46:46.617280] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.452 [2024-05-15 19:46:46.617291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.452 qpair failed and we were unable to recover it. 
00:31:20.452 [2024-05-15 19:46:46.627176] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.452 [2024-05-15 19:46:46.627234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.452 [2024-05-15 19:46:46.627247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.452 [2024-05-15 19:46:46.627252] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.452 [2024-05-15 19:46:46.627257] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.452 [2024-05-15 19:46:46.627267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.452 qpair failed and we were unable to recover it. 00:31:20.715 [2024-05-15 19:46:46.637215] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.715 [2024-05-15 19:46:46.637285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.715 [2024-05-15 19:46:46.637297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.715 [2024-05-15 19:46:46.637303] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.715 [2024-05-15 19:46:46.637311] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.715 [2024-05-15 19:46:46.637327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.715 qpair failed and we were unable to recover it. 00:31:20.715 [2024-05-15 19:46:46.647111] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.715 [2024-05-15 19:46:46.647180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.715 [2024-05-15 19:46:46.647193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.715 [2024-05-15 19:46:46.647198] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.715 [2024-05-15 19:46:46.647203] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.715 [2024-05-15 19:46:46.647214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.715 qpair failed and we were unable to recover it. 
00:31:20.715 [2024-05-15 19:46:46.657276] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.715 [2024-05-15 19:46:46.657338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.715 [2024-05-15 19:46:46.657351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.715 [2024-05-15 19:46:46.657356] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.715 [2024-05-15 19:46:46.657362] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.715 [2024-05-15 19:46:46.657373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.715 qpair failed and we were unable to recover it. 00:31:20.715 [2024-05-15 19:46:46.667304] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.715 [2024-05-15 19:46:46.667374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.715 [2024-05-15 19:46:46.667387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.715 [2024-05-15 19:46:46.667393] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.715 [2024-05-15 19:46:46.667398] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.715 [2024-05-15 19:46:46.667408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.715 qpair failed and we were unable to recover it. 00:31:20.715 [2024-05-15 19:46:46.677321] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.715 [2024-05-15 19:46:46.677386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.715 [2024-05-15 19:46:46.677398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.715 [2024-05-15 19:46:46.677404] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.715 [2024-05-15 19:46:46.677408] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.715 [2024-05-15 19:46:46.677419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.715 qpair failed and we were unable to recover it. 
00:31:20.715 [2024-05-15 19:46:46.687334] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.715 [2024-05-15 19:46:46.687402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.715 [2024-05-15 19:46:46.687415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.715 [2024-05-15 19:46:46.687421] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.715 [2024-05-15 19:46:46.687427] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.715 [2024-05-15 19:46:46.687438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.715 qpair failed and we were unable to recover it. 00:31:20.715 [2024-05-15 19:46:46.697472] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.715 [2024-05-15 19:46:46.697530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.715 [2024-05-15 19:46:46.697543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.715 [2024-05-15 19:46:46.697549] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.715 [2024-05-15 19:46:46.697553] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.715 [2024-05-15 19:46:46.697564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.715 qpair failed and we were unable to recover it. 00:31:20.715 [2024-05-15 19:46:46.707439] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.715 [2024-05-15 19:46:46.707500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.715 [2024-05-15 19:46:46.707513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.715 [2024-05-15 19:46:46.707518] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.715 [2024-05-15 19:46:46.707523] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.715 [2024-05-15 19:46:46.707534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.715 qpair failed and we were unable to recover it. 
00:31:20.715 [2024-05-15 19:46:46.717543] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.715 [2024-05-15 19:46:46.717614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.715 [2024-05-15 19:46:46.717627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.715 [2024-05-15 19:46:46.717632] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.715 [2024-05-15 19:46:46.717637] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.715 [2024-05-15 19:46:46.717648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.715 qpair failed and we were unable to recover it. 00:31:20.715 [2024-05-15 19:46:46.727458] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.715 [2024-05-15 19:46:46.727521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.715 [2024-05-15 19:46:46.727534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.715 [2024-05-15 19:46:46.727542] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.715 [2024-05-15 19:46:46.727546] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.715 [2024-05-15 19:46:46.727557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.715 qpair failed and we were unable to recover it. 00:31:20.715 [2024-05-15 19:46:46.737480] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.715 [2024-05-15 19:46:46.737571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.715 [2024-05-15 19:46:46.737584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.715 [2024-05-15 19:46:46.737589] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.715 [2024-05-15 19:46:46.737593] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.716 [2024-05-15 19:46:46.737605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.716 qpair failed and we were unable to recover it. 
00:31:20.716 [2024-05-15 19:46:46.747547] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.716 [2024-05-15 19:46:46.747658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.716 [2024-05-15 19:46:46.747671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.716 [2024-05-15 19:46:46.747677] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.716 [2024-05-15 19:46:46.747682] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.716 [2024-05-15 19:46:46.747692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.716 qpair failed and we were unable to recover it. 00:31:20.716 [2024-05-15 19:46:46.757585] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.716 [2024-05-15 19:46:46.757703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.716 [2024-05-15 19:46:46.757716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.716 [2024-05-15 19:46:46.757721] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.716 [2024-05-15 19:46:46.757726] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.716 [2024-05-15 19:46:46.757737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.716 qpair failed and we were unable to recover it. 00:31:20.716 [2024-05-15 19:46:46.767632] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.716 [2024-05-15 19:46:46.767695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.716 [2024-05-15 19:46:46.767708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.716 [2024-05-15 19:46:46.767713] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.716 [2024-05-15 19:46:46.767718] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.716 [2024-05-15 19:46:46.767729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.716 qpair failed and we were unable to recover it. 
00:31:20.716 [2024-05-15 19:46:46.777571] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.716 [2024-05-15 19:46:46.777645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.716 [2024-05-15 19:46:46.777657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.716 [2024-05-15 19:46:46.777662] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.716 [2024-05-15 19:46:46.777666] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.716 [2024-05-15 19:46:46.777677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.716 qpair failed and we were unable to recover it. 00:31:20.716 [2024-05-15 19:46:46.787639] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.716 [2024-05-15 19:46:46.787701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.716 [2024-05-15 19:46:46.787713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.716 [2024-05-15 19:46:46.787719] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.716 [2024-05-15 19:46:46.787723] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.716 [2024-05-15 19:46:46.787734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.716 qpair failed and we were unable to recover it. 00:31:20.716 [2024-05-15 19:46:46.797533] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.716 [2024-05-15 19:46:46.797625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.716 [2024-05-15 19:46:46.797637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.716 [2024-05-15 19:46:46.797643] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.716 [2024-05-15 19:46:46.797648] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.716 [2024-05-15 19:46:46.797658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.716 qpair failed and we were unable to recover it. 
00:31:20.716 [2024-05-15 19:46:46.807682] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.716 [2024-05-15 19:46:46.807743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.716 [2024-05-15 19:46:46.807756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.716 [2024-05-15 19:46:46.807761] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.716 [2024-05-15 19:46:46.807765] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.716 [2024-05-15 19:46:46.807776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.716 qpair failed and we were unable to recover it. 00:31:20.716 [2024-05-15 19:46:46.817694] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.716 [2024-05-15 19:46:46.817765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.716 [2024-05-15 19:46:46.817777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.716 [2024-05-15 19:46:46.817785] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.716 [2024-05-15 19:46:46.817790] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.716 [2024-05-15 19:46:46.817800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.716 qpair failed and we were unable to recover it. 00:31:20.716 [2024-05-15 19:46:46.827695] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.716 [2024-05-15 19:46:46.827756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.716 [2024-05-15 19:46:46.827768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.716 [2024-05-15 19:46:46.827773] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.716 [2024-05-15 19:46:46.827778] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.716 [2024-05-15 19:46:46.827789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.716 qpair failed and we were unable to recover it. 
00:31:20.716 [2024-05-15 19:46:46.837752] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.716 [2024-05-15 19:46:46.837818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.716 [2024-05-15 19:46:46.837831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.716 [2024-05-15 19:46:46.837836] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.716 [2024-05-15 19:46:46.837840] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.716 [2024-05-15 19:46:46.837853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.716 qpair failed and we were unable to recover it. 00:31:20.716 [2024-05-15 19:46:46.847781] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.716 [2024-05-15 19:46:46.847871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.716 [2024-05-15 19:46:46.847884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.716 [2024-05-15 19:46:46.847889] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.716 [2024-05-15 19:46:46.847894] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.716 [2024-05-15 19:46:46.847904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.716 qpair failed and we were unable to recover it. 00:31:20.716 [2024-05-15 19:46:46.857692] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.716 [2024-05-15 19:46:46.857749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.716 [2024-05-15 19:46:46.857762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.716 [2024-05-15 19:46:46.857767] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.716 [2024-05-15 19:46:46.857771] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.716 [2024-05-15 19:46:46.857782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.716 qpair failed and we were unable to recover it. 
00:31:20.716 [2024-05-15 19:46:46.867838] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.717 [2024-05-15 19:46:46.867903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.717 [2024-05-15 19:46:46.867916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.717 [2024-05-15 19:46:46.867921] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.717 [2024-05-15 19:46:46.867927] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.717 [2024-05-15 19:46:46.867938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.717 qpair failed and we were unable to recover it. 00:31:20.717 [2024-05-15 19:46:46.877874] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.717 [2024-05-15 19:46:46.877940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.717 [2024-05-15 19:46:46.877953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.717 [2024-05-15 19:46:46.877959] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.717 [2024-05-15 19:46:46.877964] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.717 [2024-05-15 19:46:46.877974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.717 qpair failed and we were unable to recover it. 00:31:20.717 [2024-05-15 19:46:46.887879] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.717 [2024-05-15 19:46:46.887941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.717 [2024-05-15 19:46:46.887953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.717 [2024-05-15 19:46:46.887959] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.717 [2024-05-15 19:46:46.887963] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.717 [2024-05-15 19:46:46.887974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.717 qpair failed and we were unable to recover it. 
00:31:20.717 [2024-05-15 19:46:46.897915] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.717 [2024-05-15 19:46:46.897974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.717 [2024-05-15 19:46:46.897987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.717 [2024-05-15 19:46:46.897992] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.717 [2024-05-15 19:46:46.897997] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.717 [2024-05-15 19:46:46.898007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.717 qpair failed and we were unable to recover it. 00:31:20.980 [2024-05-15 19:46:46.907954] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.980 [2024-05-15 19:46:46.908016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.980 [2024-05-15 19:46:46.908031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.980 [2024-05-15 19:46:46.908036] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.981 [2024-05-15 19:46:46.908041] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.981 [2024-05-15 19:46:46.908051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.981 qpair failed and we were unable to recover it. 00:31:20.981 [2024-05-15 19:46:46.917966] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.981 [2024-05-15 19:46:46.918032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.981 [2024-05-15 19:46:46.918045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.981 [2024-05-15 19:46:46.918050] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.981 [2024-05-15 19:46:46.918055] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.981 [2024-05-15 19:46:46.918065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.981 qpair failed and we were unable to recover it. 
00:31:20.981 [2024-05-15 19:46:46.928015] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.981 [2024-05-15 19:46:46.928073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.981 [2024-05-15 19:46:46.928086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.981 [2024-05-15 19:46:46.928091] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.981 [2024-05-15 19:46:46.928096] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.981 [2024-05-15 19:46:46.928106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.981 qpair failed and we were unable to recover it. 00:31:20.981 [2024-05-15 19:46:46.938042] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.981 [2024-05-15 19:46:46.938135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.981 [2024-05-15 19:46:46.938148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.981 [2024-05-15 19:46:46.938154] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.981 [2024-05-15 19:46:46.938158] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.981 [2024-05-15 19:46:46.938169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.981 qpair failed and we were unable to recover it. 00:31:20.981 [2024-05-15 19:46:46.948115] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.981 [2024-05-15 19:46:46.948176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.981 [2024-05-15 19:46:46.948189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.981 [2024-05-15 19:46:46.948194] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.981 [2024-05-15 19:46:46.948198] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.981 [2024-05-15 19:46:46.948211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.981 qpair failed and we were unable to recover it. 
00:31:20.981 [2024-05-15 19:46:46.958080] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.981 [2024-05-15 19:46:46.958146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.981 [2024-05-15 19:46:46.958159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.981 [2024-05-15 19:46:46.958164] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.981 [2024-05-15 19:46:46.958168] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.981 [2024-05-15 19:46:46.958179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.981 qpair failed and we were unable to recover it. 00:31:20.981 [2024-05-15 19:46:46.968105] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.981 [2024-05-15 19:46:46.968210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.981 [2024-05-15 19:46:46.968223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.981 [2024-05-15 19:46:46.968229] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.981 [2024-05-15 19:46:46.968234] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.981 [2024-05-15 19:46:46.968245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.981 qpair failed and we were unable to recover it. 00:31:20.981 [2024-05-15 19:46:46.978136] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.981 [2024-05-15 19:46:46.978227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.981 [2024-05-15 19:46:46.978240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.981 [2024-05-15 19:46:46.978245] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.981 [2024-05-15 19:46:46.978250] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.981 [2024-05-15 19:46:46.978261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.981 qpair failed and we were unable to recover it. 
00:31:20.981 [2024-05-15 19:46:46.988195] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.981 [2024-05-15 19:46:46.988255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.981 [2024-05-15 19:46:46.988268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.981 [2024-05-15 19:46:46.988273] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.981 [2024-05-15 19:46:46.988278] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.981 [2024-05-15 19:46:46.988289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.981 qpair failed and we were unable to recover it. 00:31:20.981 [2024-05-15 19:46:46.998086] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.981 [2024-05-15 19:46:46.998157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.981 [2024-05-15 19:46:46.998173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.981 [2024-05-15 19:46:46.998179] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.981 [2024-05-15 19:46:46.998183] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.981 [2024-05-15 19:46:46.998194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.981 qpair failed and we were unable to recover it. 00:31:20.981 [2024-05-15 19:46:47.008122] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.981 [2024-05-15 19:46:47.008187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.981 [2024-05-15 19:46:47.008200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.981 [2024-05-15 19:46:47.008205] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.981 [2024-05-15 19:46:47.008210] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.981 [2024-05-15 19:46:47.008221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.981 qpair failed and we were unable to recover it. 
00:31:20.981 [2024-05-15 19:46:47.018228] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.981 [2024-05-15 19:46:47.018292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.981 [2024-05-15 19:46:47.018305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.981 [2024-05-15 19:46:47.018310] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.981 [2024-05-15 19:46:47.018320] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.981 [2024-05-15 19:46:47.018331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.982 qpair failed and we were unable to recover it. 00:31:20.982 [2024-05-15 19:46:47.028297] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.982 [2024-05-15 19:46:47.028382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.982 [2024-05-15 19:46:47.028395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.982 [2024-05-15 19:46:47.028400] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.982 [2024-05-15 19:46:47.028404] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.982 [2024-05-15 19:46:47.028416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.982 qpair failed and we were unable to recover it. 00:31:20.982 [2024-05-15 19:46:47.038348] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.982 [2024-05-15 19:46:47.038411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.982 [2024-05-15 19:46:47.038423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.982 [2024-05-15 19:46:47.038428] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.982 [2024-05-15 19:46:47.038435] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.982 [2024-05-15 19:46:47.038447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.982 qpair failed and we were unable to recover it. 
00:31:20.982 [2024-05-15 19:46:47.048350] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.982 [2024-05-15 19:46:47.048409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.982 [2024-05-15 19:46:47.048422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.982 [2024-05-15 19:46:47.048427] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.982 [2024-05-15 19:46:47.048432] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.982 [2024-05-15 19:46:47.048442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.982 qpair failed and we were unable to recover it. 00:31:20.982 [2024-05-15 19:46:47.058371] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.982 [2024-05-15 19:46:47.058428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.982 [2024-05-15 19:46:47.058441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.982 [2024-05-15 19:46:47.058446] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.982 [2024-05-15 19:46:47.058451] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.982 [2024-05-15 19:46:47.058462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.982 qpair failed and we were unable to recover it. 00:31:20.982 [2024-05-15 19:46:47.068409] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.982 [2024-05-15 19:46:47.068470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.982 [2024-05-15 19:46:47.068483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.982 [2024-05-15 19:46:47.068488] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.982 [2024-05-15 19:46:47.068492] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.982 [2024-05-15 19:46:47.068503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.982 qpair failed and we were unable to recover it. 
00:31:20.982 [2024-05-15 19:46:47.078424] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.982 [2024-05-15 19:46:47.078487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.982 [2024-05-15 19:46:47.078499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.982 [2024-05-15 19:46:47.078505] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.982 [2024-05-15 19:46:47.078509] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.982 [2024-05-15 19:46:47.078520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.982 qpair failed and we were unable to recover it. 00:31:20.982 [2024-05-15 19:46:47.088343] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.982 [2024-05-15 19:46:47.088407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.982 [2024-05-15 19:46:47.088420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.982 [2024-05-15 19:46:47.088426] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.982 [2024-05-15 19:46:47.088430] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.982 [2024-05-15 19:46:47.088441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.982 qpair failed and we were unable to recover it. 00:31:20.982 [2024-05-15 19:46:47.098507] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.982 [2024-05-15 19:46:47.098565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.982 [2024-05-15 19:46:47.098578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.982 [2024-05-15 19:46:47.098583] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.982 [2024-05-15 19:46:47.098588] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.982 [2024-05-15 19:46:47.098598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.982 qpair failed and we were unable to recover it. 
00:31:20.982 [2024-05-15 19:46:47.108395] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.982 [2024-05-15 19:46:47.108454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.982 [2024-05-15 19:46:47.108466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.982 [2024-05-15 19:46:47.108472] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.982 [2024-05-15 19:46:47.108476] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.982 [2024-05-15 19:46:47.108487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.982 qpair failed and we were unable to recover it. 00:31:20.982 [2024-05-15 19:46:47.118437] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.982 [2024-05-15 19:46:47.118537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.982 [2024-05-15 19:46:47.118550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.982 [2024-05-15 19:46:47.118555] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.982 [2024-05-15 19:46:47.118560] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.982 [2024-05-15 19:46:47.118571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.982 qpair failed and we were unable to recover it. 00:31:20.982 [2024-05-15 19:46:47.128572] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.982 [2024-05-15 19:46:47.128662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.982 [2024-05-15 19:46:47.128675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.982 [2024-05-15 19:46:47.128680] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.982 [2024-05-15 19:46:47.128688] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.982 [2024-05-15 19:46:47.128698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.982 qpair failed and we were unable to recover it. 
00:31:20.982 [2024-05-15 19:46:47.138598] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.982 [2024-05-15 19:46:47.138660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.982 [2024-05-15 19:46:47.138673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.982 [2024-05-15 19:46:47.138679] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.982 [2024-05-15 19:46:47.138683] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.982 [2024-05-15 19:46:47.138694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.982 qpair failed and we were unable to recover it. 00:31:20.982 [2024-05-15 19:46:47.148510] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.982 [2024-05-15 19:46:47.148573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.982 [2024-05-15 19:46:47.148586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.982 [2024-05-15 19:46:47.148591] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.982 [2024-05-15 19:46:47.148596] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.982 [2024-05-15 19:46:47.148607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.982 qpair failed and we were unable to recover it. 00:31:20.982 [2024-05-15 19:46:47.158640] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:20.982 [2024-05-15 19:46:47.158706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:20.982 [2024-05-15 19:46:47.158718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:20.982 [2024-05-15 19:46:47.158724] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:20.983 [2024-05-15 19:46:47.158728] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:20.983 [2024-05-15 19:46:47.158739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.983 qpair failed and we were unable to recover it. 
00:31:21.246 [2024-05-15 19:46:47.168728] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.246 [2024-05-15 19:46:47.168838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.246 [2024-05-15 19:46:47.168851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.246 [2024-05-15 19:46:47.168856] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.246 [2024-05-15 19:46:47.168861] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.246 [2024-05-15 19:46:47.168872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.246 qpair failed and we were unable to recover it. 00:31:21.246 [2024-05-15 19:46:47.178698] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.246 [2024-05-15 19:46:47.178755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.246 [2024-05-15 19:46:47.178768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.246 [2024-05-15 19:46:47.178773] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.246 [2024-05-15 19:46:47.178777] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.246 [2024-05-15 19:46:47.178789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.246 qpair failed and we were unable to recover it. 00:31:21.246 [2024-05-15 19:46:47.188791] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.246 [2024-05-15 19:46:47.188854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.246 [2024-05-15 19:46:47.188867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.246 [2024-05-15 19:46:47.188872] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.246 [2024-05-15 19:46:47.188877] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.246 [2024-05-15 19:46:47.188888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.246 qpair failed and we were unable to recover it. 
00:31:21.246 [2024-05-15 19:46:47.198768] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.246 [2024-05-15 19:46:47.198836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.246 [2024-05-15 19:46:47.198849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.246 [2024-05-15 19:46:47.198854] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.246 [2024-05-15 19:46:47.198858] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.246 [2024-05-15 19:46:47.198869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.246 qpair failed and we were unable to recover it. 00:31:21.246 [2024-05-15 19:46:47.208807] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.246 [2024-05-15 19:46:47.208865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.246 [2024-05-15 19:46:47.208878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.246 [2024-05-15 19:46:47.208883] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.246 [2024-05-15 19:46:47.208888] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.246 [2024-05-15 19:46:47.208900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.246 qpair failed and we were unable to recover it. 00:31:21.246 [2024-05-15 19:46:47.218857] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.246 [2024-05-15 19:46:47.218916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.246 [2024-05-15 19:46:47.218928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.246 [2024-05-15 19:46:47.218940] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.246 [2024-05-15 19:46:47.218944] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.246 [2024-05-15 19:46:47.218955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.246 qpair failed and we were unable to recover it. 
00:31:21.246 [2024-05-15 19:46:47.228826] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.246 [2024-05-15 19:46:47.228887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.246 [2024-05-15 19:46:47.228900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.246 [2024-05-15 19:46:47.228905] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.246 [2024-05-15 19:46:47.228910] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.246 [2024-05-15 19:46:47.228920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.246 qpair failed and we were unable to recover it. 00:31:21.246 [2024-05-15 19:46:47.238922] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.246 [2024-05-15 19:46:47.238992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.246 [2024-05-15 19:46:47.239004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.246 [2024-05-15 19:46:47.239010] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.246 [2024-05-15 19:46:47.239014] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.246 [2024-05-15 19:46:47.239025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.246 qpair failed and we were unable to recover it. 00:31:21.246 [2024-05-15 19:46:47.248902] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.246 [2024-05-15 19:46:47.248963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.246 [2024-05-15 19:46:47.248982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.246 [2024-05-15 19:46:47.248988] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.246 [2024-05-15 19:46:47.248993] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.246 [2024-05-15 19:46:47.249007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.246 qpair failed and we were unable to recover it. 
00:31:21.246 [2024-05-15 19:46:47.258956] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.246 [2024-05-15 19:46:47.259020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.246 [2024-05-15 19:46:47.259038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.246 [2024-05-15 19:46:47.259045] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.246 [2024-05-15 19:46:47.259050] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.246 [2024-05-15 19:46:47.259064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.246 qpair failed and we were unable to recover it. 00:31:21.246 [2024-05-15 19:46:47.268978] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.246 [2024-05-15 19:46:47.269076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.246 [2024-05-15 19:46:47.269095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.246 [2024-05-15 19:46:47.269102] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.246 [2024-05-15 19:46:47.269107] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.247 [2024-05-15 19:46:47.269121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.247 qpair failed and we were unable to recover it. 00:31:21.247 [2024-05-15 19:46:47.278978] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.247 [2024-05-15 19:46:47.279058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.247 [2024-05-15 19:46:47.279074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.247 [2024-05-15 19:46:47.279080] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.247 [2024-05-15 19:46:47.279085] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.247 [2024-05-15 19:46:47.279097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.247 qpair failed and we were unable to recover it. 
00:31:21.247 [2024-05-15 19:46:47.288982] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.247 [2024-05-15 19:46:47.289050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.247 [2024-05-15 19:46:47.289064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.247 [2024-05-15 19:46:47.289069] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.247 [2024-05-15 19:46:47.289073] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.247 [2024-05-15 19:46:47.289085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.247 qpair failed and we were unable to recover it. 00:31:21.247 [2024-05-15 19:46:47.299033] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.247 [2024-05-15 19:46:47.299089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.247 [2024-05-15 19:46:47.299102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.247 [2024-05-15 19:46:47.299107] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.247 [2024-05-15 19:46:47.299112] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.247 [2024-05-15 19:46:47.299123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.247 qpair failed and we were unable to recover it. 00:31:21.247 [2024-05-15 19:46:47.309091] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.247 [2024-05-15 19:46:47.309154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.247 [2024-05-15 19:46:47.309170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.247 [2024-05-15 19:46:47.309177] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.247 [2024-05-15 19:46:47.309181] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.247 [2024-05-15 19:46:47.309192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.247 qpair failed and we were unable to recover it. 
00:31:21.247 [2024-05-15 19:46:47.319085] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.247 [2024-05-15 19:46:47.319149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.247 [2024-05-15 19:46:47.319163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.247 [2024-05-15 19:46:47.319168] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.247 [2024-05-15 19:46:47.319173] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.247 [2024-05-15 19:46:47.319184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.247 qpair failed and we were unable to recover it. 00:31:21.247 [2024-05-15 19:46:47.329126] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.247 [2024-05-15 19:46:47.329208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.247 [2024-05-15 19:46:47.329221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.247 [2024-05-15 19:46:47.329227] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.247 [2024-05-15 19:46:47.329232] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.247 [2024-05-15 19:46:47.329243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.247 qpair failed and we were unable to recover it. 00:31:21.247 [2024-05-15 19:46:47.339135] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.247 [2024-05-15 19:46:47.339192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.247 [2024-05-15 19:46:47.339205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.247 [2024-05-15 19:46:47.339210] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.247 [2024-05-15 19:46:47.339215] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.247 [2024-05-15 19:46:47.339226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.247 qpair failed and we were unable to recover it. 
00:31:21.247 [2024-05-15 19:46:47.349196] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.247 [2024-05-15 19:46:47.349259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.247 [2024-05-15 19:46:47.349272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.247 [2024-05-15 19:46:47.349277] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.247 [2024-05-15 19:46:47.349282] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.247 [2024-05-15 19:46:47.349296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.247 qpair failed and we were unable to recover it. 00:31:21.247 [2024-05-15 19:46:47.359161] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.247 [2024-05-15 19:46:47.359265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.247 [2024-05-15 19:46:47.359278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.247 [2024-05-15 19:46:47.359283] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.247 [2024-05-15 19:46:47.359287] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.247 [2024-05-15 19:46:47.359299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.247 qpair failed and we were unable to recover it. 00:31:21.247 [2024-05-15 19:46:47.369267] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.247 [2024-05-15 19:46:47.369328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.247 [2024-05-15 19:46:47.369341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.247 [2024-05-15 19:46:47.369346] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.247 [2024-05-15 19:46:47.369351] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.247 [2024-05-15 19:46:47.369362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.247 qpair failed and we were unable to recover it. 
00:31:21.247 [2024-05-15 19:46:47.379268] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.247 [2024-05-15 19:46:47.379329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.247 [2024-05-15 19:46:47.379342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.247 [2024-05-15 19:46:47.379347] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.247 [2024-05-15 19:46:47.379352] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.247 [2024-05-15 19:46:47.379362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.247 qpair failed and we were unable to recover it. 00:31:21.247 [2024-05-15 19:46:47.389289] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.247 [2024-05-15 19:46:47.389386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.247 [2024-05-15 19:46:47.389400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.247 [2024-05-15 19:46:47.389405] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.247 [2024-05-15 19:46:47.389409] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.247 [2024-05-15 19:46:47.389420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.247 qpair failed and we were unable to recover it. 00:31:21.247 [2024-05-15 19:46:47.399325] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.247 [2024-05-15 19:46:47.399397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.247 [2024-05-15 19:46:47.399413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.247 [2024-05-15 19:46:47.399418] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.247 [2024-05-15 19:46:47.399422] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.247 [2024-05-15 19:46:47.399433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.247 qpair failed and we were unable to recover it. 
00:31:21.247 [2024-05-15 19:46:47.409368] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.247 [2024-05-15 19:46:47.409431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.247 [2024-05-15 19:46:47.409444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.247 [2024-05-15 19:46:47.409449] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.248 [2024-05-15 19:46:47.409454] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.248 [2024-05-15 19:46:47.409464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.248 qpair failed and we were unable to recover it. 00:31:21.248 [2024-05-15 19:46:47.419384] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.248 [2024-05-15 19:46:47.419480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.248 [2024-05-15 19:46:47.419492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.248 [2024-05-15 19:46:47.419497] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.248 [2024-05-15 19:46:47.419501] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.248 [2024-05-15 19:46:47.419512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.248 qpair failed and we were unable to recover it. 00:31:21.511 [2024-05-15 19:46:47.429460] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.511 [2024-05-15 19:46:47.429518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.511 [2024-05-15 19:46:47.429531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.511 [2024-05-15 19:46:47.429536] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.511 [2024-05-15 19:46:47.429541] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.511 [2024-05-15 19:46:47.429551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.511 qpair failed and we were unable to recover it. 
00:31:21.511 [2024-05-15 19:46:47.439398] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.511 [2024-05-15 19:46:47.439462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.511 [2024-05-15 19:46:47.439474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.511 [2024-05-15 19:46:47.439480] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.511 [2024-05-15 19:46:47.439488] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.511 [2024-05-15 19:46:47.439499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.511 qpair failed and we were unable to recover it. 00:31:21.511 [2024-05-15 19:46:47.449504] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.511 [2024-05-15 19:46:47.449561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.511 [2024-05-15 19:46:47.449574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.511 [2024-05-15 19:46:47.449579] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.511 [2024-05-15 19:46:47.449583] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.511 [2024-05-15 19:46:47.449594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.511 qpair failed and we were unable to recover it. 00:31:21.511 [2024-05-15 19:46:47.459457] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.511 [2024-05-15 19:46:47.459554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.511 [2024-05-15 19:46:47.459567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.511 [2024-05-15 19:46:47.459573] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.511 [2024-05-15 19:46:47.459578] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.511 [2024-05-15 19:46:47.459589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.511 qpair failed and we were unable to recover it. 
00:31:21.511 [2024-05-15 19:46:47.469505] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.511 [2024-05-15 19:46:47.469567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.511 [2024-05-15 19:46:47.469579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.511 [2024-05-15 19:46:47.469585] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.511 [2024-05-15 19:46:47.469590] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.511 [2024-05-15 19:46:47.469600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.511 qpair failed and we were unable to recover it. 00:31:21.511 [2024-05-15 19:46:47.479529] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.511 [2024-05-15 19:46:47.479588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.511 [2024-05-15 19:46:47.479600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.511 [2024-05-15 19:46:47.479606] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.511 [2024-05-15 19:46:47.479610] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.511 [2024-05-15 19:46:47.479621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.511 qpair failed and we were unable to recover it. 00:31:21.511 [2024-05-15 19:46:47.489425] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.511 [2024-05-15 19:46:47.489531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.511 [2024-05-15 19:46:47.489544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.511 [2024-05-15 19:46:47.489550] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.511 [2024-05-15 19:46:47.489554] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.511 [2024-05-15 19:46:47.489565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.511 qpair failed and we were unable to recover it. 
00:31:21.511 [2024-05-15 19:46:47.499626] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.511 [2024-05-15 19:46:47.499689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.511 [2024-05-15 19:46:47.499701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.511 [2024-05-15 19:46:47.499706] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.511 [2024-05-15 19:46:47.499711] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.511 [2024-05-15 19:46:47.499721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.511 qpair failed and we were unable to recover it. 00:31:21.511 [2024-05-15 19:46:47.509649] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.511 [2024-05-15 19:46:47.509727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.511 [2024-05-15 19:46:47.509742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.511 [2024-05-15 19:46:47.509747] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.511 [2024-05-15 19:46:47.509752] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.511 [2024-05-15 19:46:47.509764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.511 qpair failed and we were unable to recover it. 00:31:21.511 [2024-05-15 19:46:47.519553] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.511 [2024-05-15 19:46:47.519649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.511 [2024-05-15 19:46:47.519662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.511 [2024-05-15 19:46:47.519667] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.511 [2024-05-15 19:46:47.519672] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.511 [2024-05-15 19:46:47.519683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.511 qpair failed and we were unable to recover it. 
00:31:21.511 [2024-05-15 19:46:47.529673] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.511 [2024-05-15 19:46:47.529765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.511 [2024-05-15 19:46:47.529778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.511 [2024-05-15 19:46:47.529783] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.511 [2024-05-15 19:46:47.529790] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.511 [2024-05-15 19:46:47.529801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.511 qpair failed and we were unable to recover it. 00:31:21.511 [2024-05-15 19:46:47.539710] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.511 [2024-05-15 19:46:47.539766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.512 [2024-05-15 19:46:47.539779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.512 [2024-05-15 19:46:47.539784] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.512 [2024-05-15 19:46:47.539789] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.512 [2024-05-15 19:46:47.539799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.512 qpair failed and we were unable to recover it. 00:31:21.512 [2024-05-15 19:46:47.549761] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.512 [2024-05-15 19:46:47.549876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.512 [2024-05-15 19:46:47.549890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.512 [2024-05-15 19:46:47.549895] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.512 [2024-05-15 19:46:47.549899] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.512 [2024-05-15 19:46:47.549910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.512 qpair failed and we were unable to recover it. 
00:31:21.512 [2024-05-15 19:46:47.559670] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.512 [2024-05-15 19:46:47.559733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.512 [2024-05-15 19:46:47.559746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.512 [2024-05-15 19:46:47.559751] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.512 [2024-05-15 19:46:47.559755] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.512 [2024-05-15 19:46:47.559766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.512 qpair failed and we were unable to recover it. 00:31:21.512 [2024-05-15 19:46:47.569804] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.512 [2024-05-15 19:46:47.569892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.512 [2024-05-15 19:46:47.569906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.512 [2024-05-15 19:46:47.569911] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.512 [2024-05-15 19:46:47.569916] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.512 [2024-05-15 19:46:47.569926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.512 qpair failed and we were unable to recover it. 00:31:21.512 [2024-05-15 19:46:47.579771] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.512 [2024-05-15 19:46:47.579832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.512 [2024-05-15 19:46:47.579845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.512 [2024-05-15 19:46:47.579850] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.512 [2024-05-15 19:46:47.579854] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.512 [2024-05-15 19:46:47.579865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.512 qpair failed and we were unable to recover it. 
00:31:21.512 [2024-05-15 19:46:47.589823] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.512 [2024-05-15 19:46:47.589884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.512 [2024-05-15 19:46:47.589896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.512 [2024-05-15 19:46:47.589901] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.512 [2024-05-15 19:46:47.589906] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.512 [2024-05-15 19:46:47.589916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.512 qpair failed and we were unable to recover it. 00:31:21.512 [2024-05-15 19:46:47.599845] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.512 [2024-05-15 19:46:47.599910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.512 [2024-05-15 19:46:47.599922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.512 [2024-05-15 19:46:47.599927] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.512 [2024-05-15 19:46:47.599931] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.512 [2024-05-15 19:46:47.599942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.512 qpair failed and we were unable to recover it. 00:31:21.512 [2024-05-15 19:46:47.609885] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.512 [2024-05-15 19:46:47.609947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.512 [2024-05-15 19:46:47.609960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.512 [2024-05-15 19:46:47.609965] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.512 [2024-05-15 19:46:47.609969] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.512 [2024-05-15 19:46:47.609979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.512 qpair failed and we were unable to recover it. 
00:31:21.512 [2024-05-15 19:46:47.619908] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.512 [2024-05-15 19:46:47.619978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.512 [2024-05-15 19:46:47.619997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.512 [2024-05-15 19:46:47.620007] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.512 [2024-05-15 19:46:47.620012] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.512 [2024-05-15 19:46:47.620026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.512 qpair failed and we were unable to recover it. 00:31:21.512 [2024-05-15 19:46:47.629958] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.512 [2024-05-15 19:46:47.630024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.512 [2024-05-15 19:46:47.630043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.512 [2024-05-15 19:46:47.630050] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.512 [2024-05-15 19:46:47.630055] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.512 [2024-05-15 19:46:47.630068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.512 qpair failed and we were unable to recover it. 00:31:21.512 [2024-05-15 19:46:47.640004] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.512 [2024-05-15 19:46:47.640118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.512 [2024-05-15 19:46:47.640137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.512 [2024-05-15 19:46:47.640143] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.512 [2024-05-15 19:46:47.640148] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.512 [2024-05-15 19:46:47.640162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.512 qpair failed and we were unable to recover it. 
00:31:21.512 [2024-05-15 19:46:47.649998] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.512 [2024-05-15 19:46:47.650089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.512 [2024-05-15 19:46:47.650109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.512 [2024-05-15 19:46:47.650115] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.512 [2024-05-15 19:46:47.650120] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.512 [2024-05-15 19:46:47.650134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.512 qpair failed and we were unable to recover it. 00:31:21.512 [2024-05-15 19:46:47.659969] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.512 [2024-05-15 19:46:47.660027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.512 [2024-05-15 19:46:47.660041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.512 [2024-05-15 19:46:47.660046] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.512 [2024-05-15 19:46:47.660051] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.512 [2024-05-15 19:46:47.660062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.512 qpair failed and we were unable to recover it. 00:31:21.512 [2024-05-15 19:46:47.670011] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.512 [2024-05-15 19:46:47.670074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.512 [2024-05-15 19:46:47.670087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.512 [2024-05-15 19:46:47.670092] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.512 [2024-05-15 19:46:47.670097] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.512 [2024-05-15 19:46:47.670108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.512 qpair failed and we were unable to recover it. 
00:31:21.512 [2024-05-15 19:46:47.680178] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.512 [2024-05-15 19:46:47.680250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.513 [2024-05-15 19:46:47.680269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.513 [2024-05-15 19:46:47.680275] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.513 [2024-05-15 19:46:47.680280] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.513 [2024-05-15 19:46:47.680294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.513 qpair failed and we were unable to recover it. 00:31:21.513 [2024-05-15 19:46:47.690100] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.513 [2024-05-15 19:46:47.690158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.513 [2024-05-15 19:46:47.690171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.513 [2024-05-15 19:46:47.690177] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.513 [2024-05-15 19:46:47.690182] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.513 [2024-05-15 19:46:47.690193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.513 qpair failed and we were unable to recover it. 00:31:21.776 [2024-05-15 19:46:47.700140] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.776 [2024-05-15 19:46:47.700202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.776 [2024-05-15 19:46:47.700215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.776 [2024-05-15 19:46:47.700221] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.776 [2024-05-15 19:46:47.700225] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.776 [2024-05-15 19:46:47.700236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.776 qpair failed and we were unable to recover it. 
00:31:21.776 [2024-05-15 19:46:47.710188] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.776 [2024-05-15 19:46:47.710259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.776 [2024-05-15 19:46:47.710276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.776 [2024-05-15 19:46:47.710281] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.776 [2024-05-15 19:46:47.710286] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.776 [2024-05-15 19:46:47.710297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.776 qpair failed and we were unable to recover it. 00:31:21.776 [2024-05-15 19:46:47.720197] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.776 [2024-05-15 19:46:47.720263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.776 [2024-05-15 19:46:47.720276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.776 [2024-05-15 19:46:47.720281] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.776 [2024-05-15 19:46:47.720285] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.776 [2024-05-15 19:46:47.720296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.776 qpair failed and we were unable to recover it. 00:31:21.776 [2024-05-15 19:46:47.730214] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.776 [2024-05-15 19:46:47.730279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.776 [2024-05-15 19:46:47.730292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.776 [2024-05-15 19:46:47.730297] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.776 [2024-05-15 19:46:47.730302] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.776 [2024-05-15 19:46:47.730316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.776 qpair failed and we were unable to recover it. 
00:31:21.776 [2024-05-15 19:46:47.740271] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.776 [2024-05-15 19:46:47.740345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.776 [2024-05-15 19:46:47.740358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.776 [2024-05-15 19:46:47.740364] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.776 [2024-05-15 19:46:47.740368] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.776 [2024-05-15 19:46:47.740380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.776 qpair failed and we were unable to recover it. 00:31:21.776 [2024-05-15 19:46:47.750277] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.776 [2024-05-15 19:46:47.750373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.776 [2024-05-15 19:46:47.750385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.776 [2024-05-15 19:46:47.750392] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.776 [2024-05-15 19:46:47.750397] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.777 [2024-05-15 19:46:47.750411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.777 qpair failed and we were unable to recover it. 00:31:21.777 [2024-05-15 19:46:47.760304] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.777 [2024-05-15 19:46:47.760383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.777 [2024-05-15 19:46:47.760396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.777 [2024-05-15 19:46:47.760401] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.777 [2024-05-15 19:46:47.760406] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.777 [2024-05-15 19:46:47.760417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.777 qpair failed and we were unable to recover it. 
00:31:21.777 [2024-05-15 19:46:47.770361] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.777 [2024-05-15 19:46:47.770439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.777 [2024-05-15 19:46:47.770452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.777 [2024-05-15 19:46:47.770458] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.777 [2024-05-15 19:46:47.770463] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.777 [2024-05-15 19:46:47.770474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.777 qpair failed and we were unable to recover it. 00:31:21.777 [2024-05-15 19:46:47.780346] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.777 [2024-05-15 19:46:47.780403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.777 [2024-05-15 19:46:47.780415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.777 [2024-05-15 19:46:47.780420] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.777 [2024-05-15 19:46:47.780425] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.777 [2024-05-15 19:46:47.780436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.777 qpair failed and we were unable to recover it. 00:31:21.777 [2024-05-15 19:46:47.790265] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.777 [2024-05-15 19:46:47.790341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.777 [2024-05-15 19:46:47.790354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.777 [2024-05-15 19:46:47.790359] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.777 [2024-05-15 19:46:47.790364] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.777 [2024-05-15 19:46:47.790375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.777 qpair failed and we were unable to recover it. 
00:31:21.777 [2024-05-15 19:46:47.800427] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.777 [2024-05-15 19:46:47.800491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.777 [2024-05-15 19:46:47.800506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.777 [2024-05-15 19:46:47.800512] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.777 [2024-05-15 19:46:47.800516] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.777 [2024-05-15 19:46:47.800527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.777 qpair failed and we were unable to recover it. 00:31:21.777 [2024-05-15 19:46:47.810451] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.777 [2024-05-15 19:46:47.810508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.777 [2024-05-15 19:46:47.810521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.777 [2024-05-15 19:46:47.810526] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.777 [2024-05-15 19:46:47.810530] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.777 [2024-05-15 19:46:47.810541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.777 qpair failed and we were unable to recover it. 00:31:21.777 [2024-05-15 19:46:47.820562] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.777 [2024-05-15 19:46:47.820626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.777 [2024-05-15 19:46:47.820639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.777 [2024-05-15 19:46:47.820644] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.777 [2024-05-15 19:46:47.820648] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.777 [2024-05-15 19:46:47.820660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.777 qpair failed and we were unable to recover it. 
00:31:21.777 [2024-05-15 19:46:47.830508] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.777 [2024-05-15 19:46:47.830570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.777 [2024-05-15 19:46:47.830583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.777 [2024-05-15 19:46:47.830588] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.777 [2024-05-15 19:46:47.830592] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.777 [2024-05-15 19:46:47.830603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.777 qpair failed and we were unable to recover it. 00:31:21.777 [2024-05-15 19:46:47.840450] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.777 [2024-05-15 19:46:47.840551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.777 [2024-05-15 19:46:47.840564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.777 [2024-05-15 19:46:47.840569] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.777 [2024-05-15 19:46:47.840574] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.777 [2024-05-15 19:46:47.840589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.777 qpair failed and we were unable to recover it. 00:31:21.777 [2024-05-15 19:46:47.850552] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.777 [2024-05-15 19:46:47.850608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.777 [2024-05-15 19:46:47.850621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.777 [2024-05-15 19:46:47.850626] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.777 [2024-05-15 19:46:47.850630] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.777 [2024-05-15 19:46:47.850641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.777 qpair failed and we were unable to recover it. 
00:31:21.777 [2024-05-15 19:46:47.860632] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.777 [2024-05-15 19:46:47.860695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.777 [2024-05-15 19:46:47.860707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.777 [2024-05-15 19:46:47.860712] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.777 [2024-05-15 19:46:47.860717] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.777 [2024-05-15 19:46:47.860727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.777 qpair failed and we were unable to recover it. 00:31:21.777 [2024-05-15 19:46:47.870649] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.777 [2024-05-15 19:46:47.870740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.777 [2024-05-15 19:46:47.870753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.777 [2024-05-15 19:46:47.870758] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.777 [2024-05-15 19:46:47.870762] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.777 [2024-05-15 19:46:47.870773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.777 qpair failed and we were unable to recover it. 00:31:21.777 [2024-05-15 19:46:47.880632] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.777 [2024-05-15 19:46:47.880729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.777 [2024-05-15 19:46:47.880741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.777 [2024-05-15 19:46:47.880746] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.777 [2024-05-15 19:46:47.880751] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.777 [2024-05-15 19:46:47.880761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.778 qpair failed and we were unable to recover it. 
00:31:21.778 [2024-05-15 19:46:47.890649] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.778 [2024-05-15 19:46:47.890712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.778 [2024-05-15 19:46:47.890724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.778 [2024-05-15 19:46:47.890729] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.778 [2024-05-15 19:46:47.890733] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.778 [2024-05-15 19:46:47.890744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.778 qpair failed and we were unable to recover it. 00:31:21.778 [2024-05-15 19:46:47.900699] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.778 [2024-05-15 19:46:47.900756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.778 [2024-05-15 19:46:47.900768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.778 [2024-05-15 19:46:47.900773] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.778 [2024-05-15 19:46:47.900777] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.778 [2024-05-15 19:46:47.900788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.778 qpair failed and we were unable to recover it. 00:31:21.778 [2024-05-15 19:46:47.910731] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.778 [2024-05-15 19:46:47.910792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.778 [2024-05-15 19:46:47.910804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.778 [2024-05-15 19:46:47.910809] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.778 [2024-05-15 19:46:47.910814] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.778 [2024-05-15 19:46:47.910824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.778 qpair failed and we were unable to recover it. 
00:31:21.778 [2024-05-15 19:46:47.920753] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.778 [2024-05-15 19:46:47.920818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.778 [2024-05-15 19:46:47.920830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.778 [2024-05-15 19:46:47.920835] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.778 [2024-05-15 19:46:47.920840] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.778 [2024-05-15 19:46:47.920850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.778 qpair failed and we were unable to recover it. 00:31:21.778 [2024-05-15 19:46:47.930783] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.778 [2024-05-15 19:46:47.930844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.778 [2024-05-15 19:46:47.930857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.778 [2024-05-15 19:46:47.930862] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.778 [2024-05-15 19:46:47.930869] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.778 [2024-05-15 19:46:47.930880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.778 qpair failed and we were unable to recover it. 00:31:21.778 [2024-05-15 19:46:47.940788] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.778 [2024-05-15 19:46:47.940852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.778 [2024-05-15 19:46:47.940864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.778 [2024-05-15 19:46:47.940869] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.778 [2024-05-15 19:46:47.940873] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.778 [2024-05-15 19:46:47.940884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.778 qpair failed and we were unable to recover it. 
00:31:21.778 [2024-05-15 19:46:47.950830] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:21.778 [2024-05-15 19:46:47.950893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:21.778 [2024-05-15 19:46:47.950905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:21.778 [2024-05-15 19:46:47.950910] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:21.778 [2024-05-15 19:46:47.950915] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:21.778 [2024-05-15 19:46:47.950926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.778 qpair failed and we were unable to recover it. 00:31:22.041 [2024-05-15 19:46:47.960922] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.041 [2024-05-15 19:46:47.960988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.041 [2024-05-15 19:46:47.961001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.041 [2024-05-15 19:46:47.961006] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.041 [2024-05-15 19:46:47.961011] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:22.041 [2024-05-15 19:46:47.961021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:22.041 qpair failed and we were unable to recover it. 00:31:22.041 [2024-05-15 19:46:47.970845] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.041 [2024-05-15 19:46:47.970904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.041 [2024-05-15 19:46:47.970916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.041 [2024-05-15 19:46:47.970921] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.041 [2024-05-15 19:46:47.970926] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:22.041 [2024-05-15 19:46:47.970936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:22.041 qpair failed and we were unable to recover it. 
00:31:22.041 [2024-05-15 19:46:47.980930] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.041 [2024-05-15 19:46:47.980987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.041 [2024-05-15 19:46:47.981000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.041 [2024-05-15 19:46:47.981005] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.041 [2024-05-15 19:46:47.981009] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:22.041 [2024-05-15 19:46:47.981020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:22.042 qpair failed and we were unable to recover it. 00:31:22.042 [2024-05-15 19:46:47.990954] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.042 [2024-05-15 19:46:47.991018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.042 [2024-05-15 19:46:47.991037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.042 [2024-05-15 19:46:47.991043] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.042 [2024-05-15 19:46:47.991048] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:22.042 [2024-05-15 19:46:47.991062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:22.042 qpair failed and we were unable to recover it. 00:31:22.042 [2024-05-15 19:46:48.000969] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.042 [2024-05-15 19:46:48.001038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.042 [2024-05-15 19:46:48.001057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.042 [2024-05-15 19:46:48.001063] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.042 [2024-05-15 19:46:48.001068] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:22.042 [2024-05-15 19:46:48.001082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:22.042 qpair failed and we were unable to recover it. 
00:31:22.042 [2024-05-15 19:46:48.011000] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.042 [2024-05-15 19:46:48.011068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.042 [2024-05-15 19:46:48.011088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.042 [2024-05-15 19:46:48.011094] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.042 [2024-05-15 19:46:48.011099] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:22.042 [2024-05-15 19:46:48.011114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:22.042 qpair failed and we were unable to recover it. 00:31:22.042 [2024-05-15 19:46:48.020904] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.042 [2024-05-15 19:46:48.020968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.042 [2024-05-15 19:46:48.020987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.042 [2024-05-15 19:46:48.020998] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.042 [2024-05-15 19:46:48.021003] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:22.042 [2024-05-15 19:46:48.021017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:22.042 qpair failed and we were unable to recover it. 00:31:22.042 [2024-05-15 19:46:48.031048] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.042 [2024-05-15 19:46:48.031116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.042 [2024-05-15 19:46:48.031135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.042 [2024-05-15 19:46:48.031142] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.042 [2024-05-15 19:46:48.031147] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:22.042 [2024-05-15 19:46:48.031161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:22.042 qpair failed and we were unable to recover it. 
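The rejections above originate on the target side: _nvmf_ctrlr_add_io_qpair cannot find controller ID 0x1, so every I/O-queue CONNECT is refused. As a hedged illustration only (these commands are not part of target_disconnect.sh, and the rpc.py path is assumed to be scripts/rpc.py inside the SPDK tree), the surviving target state can be inspected while this is happening:

    # List subsystems, their listeners, and any queue pairs still alive on the target
    scripts/rpc.py nvmf_get_subsystems
    scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode1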
00:31:22.042 [2024-05-15 19:46:48.041107] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.042 [2024-05-15 19:46:48.041173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.042 [2024-05-15 19:46:48.041187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.042 [2024-05-15 19:46:48.041193] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.042 [2024-05-15 19:46:48.041197] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:22.042 [2024-05-15 19:46:48.041208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:22.042 qpair failed and we were unable to recover it. 00:31:22.042 [2024-05-15 19:46:48.051113] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.042 [2024-05-15 19:46:48.051172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.042 [2024-05-15 19:46:48.051184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.042 [2024-05-15 19:46:48.051189] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.042 [2024-05-15 19:46:48.051194] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:22.042 [2024-05-15 19:46:48.051205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:22.042 qpair failed and we were unable to recover it. 00:31:22.042 [2024-05-15 19:46:48.061129] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.042 [2024-05-15 19:46:48.061187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.042 [2024-05-15 19:46:48.061199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.042 [2024-05-15 19:46:48.061204] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.042 [2024-05-15 19:46:48.061209] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:22.042 [2024-05-15 19:46:48.061220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:22.042 qpair failed and we were unable to recover it. 
00:31:22.042 [2024-05-15 19:46:48.071209] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.042 [2024-05-15 19:46:48.071278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.042 [2024-05-15 19:46:48.071290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.042 [2024-05-15 19:46:48.071295] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.042 [2024-05-15 19:46:48.071300] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:22.042 [2024-05-15 19:46:48.071310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:22.042 qpair failed and we were unable to recover it. 00:31:22.042 [2024-05-15 19:46:48.081182] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.042 [2024-05-15 19:46:48.081245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.042 [2024-05-15 19:46:48.081257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.042 [2024-05-15 19:46:48.081262] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.042 [2024-05-15 19:46:48.081266] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:22.042 [2024-05-15 19:46:48.081277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:22.042 qpair failed and we were unable to recover it. 00:31:22.042 [2024-05-15 19:46:48.091227] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.042 [2024-05-15 19:46:48.091284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.042 [2024-05-15 19:46:48.091297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.042 [2024-05-15 19:46:48.091302] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.042 [2024-05-15 19:46:48.091306] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:22.042 [2024-05-15 19:46:48.091321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:22.042 qpair failed and we were unable to recover it. 
00:31:22.042 [2024-05-15 19:46:48.101236] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.042 [2024-05-15 19:46:48.101294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.042 [2024-05-15 19:46:48.101307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.042 [2024-05-15 19:46:48.101315] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.042 [2024-05-15 19:46:48.101320] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9490000b90 00:31:22.042 [2024-05-15 19:46:48.101331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:22.042 qpair failed and we were unable to recover it. 00:31:22.042 [2024-05-15 19:46:48.111380] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.043 [2024-05-15 19:46:48.111552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.043 [2024-05-15 19:46:48.111616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.043 [2024-05-15 19:46:48.111652] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.043 [2024-05-15 19:46:48.111673] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9488000b90 00:31:22.043 [2024-05-15 19:46:48.111727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.043 qpair failed and we were unable to recover it. 00:31:22.043 [2024-05-15 19:46:48.121355] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.043 [2024-05-15 19:46:48.121455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.043 [2024-05-15 19:46:48.121489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.043 [2024-05-15 19:46:48.121504] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.043 [2024-05-15 19:46:48.121518] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9488000b90 00:31:22.043 [2024-05-15 19:46:48.121550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.043 qpair failed and we were unable to recover it. 
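For context, sct 1 / sc 130 (0x82) in these records is the Fabrics CONNECT invalid-parameters status, returned here because the I/O-queue CONNECT names a controller ID the target no longer knows. A minimal, hedged sketch of the same CONNECT attempted by hand with nvme-cli (not part of the test scripts; against a healthy target it would simply succeed):

    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme list-subsys                               # check whether the controller actually attached
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1  # clean up if it did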
00:31:22.043 [2024-05-15 19:46:48.121941] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248e0f0 is same with the state(5) to be set 00:31:22.043 Read completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Read completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Read completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Read completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Read completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Read completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Read completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Read completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Write completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Write completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Write completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Read completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Read completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Read completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Read completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Read completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Write completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Write completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Write completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Read completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Read completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Read completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Read completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Read completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Read completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Write completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Read completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Read completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Write completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Read completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Read completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Write completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 [2024-05-15 19:46:48.122337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:22.043 Read completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Write completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Write completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Read completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Write completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Read 
completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Write completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Read completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Read completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Write completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Write completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Read completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Read completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Read completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Write completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Write completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Write completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Read completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Write completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Write completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Read completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Write completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Read completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Read completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Read completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Write completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Write completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Write completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Write completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Write completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Write completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 Read completed with error (sct=0, sc=8) 00:31:22.043 starting I/O failed 00:31:22.043 [2024-05-15 19:46:48.123099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:22.043 [2024-05-15 19:46:48.131357] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.043 [2024-05-15 19:46:48.131460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.043 [2024-05-15 19:46:48.131481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.043 [2024-05-15 19:46:48.131490] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.043 [2024-05-15 19:46:48.131497] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9498000b90 00:31:22.043 [2024-05-15 19:46:48.131515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:22.043 qpair failed and we were unable to recover it. 
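Each completion in the burst above carries sct=0, sc=8, the generic "command aborted due to SQ deletion" status that SPDK applies to outstanding I/O when a queue pair is torn down. A small hedged helper to confirm the whole burst is that single status (console.log is a placeholder name for a saved copy of this output, not a file the test produces):

    grep -o 'completed with error (sct=0, sc=8)' console.log | wc -l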
00:31:22.043 [2024-05-15 19:46:48.141359] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.043 [2024-05-15 19:46:48.141434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.043 [2024-05-15 19:46:48.141454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.043 [2024-05-15 19:46:48.141463] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.043 [2024-05-15 19:46:48.141469] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9498000b90 00:31:22.043 [2024-05-15 19:46:48.141485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:22.043 qpair failed and we were unable to recover it. 00:31:22.043 [2024-05-15 19:46:48.151438] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.043 [2024-05-15 19:46:48.151626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.043 [2024-05-15 19:46:48.151692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.043 [2024-05-15 19:46:48.151716] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.043 [2024-05-15 19:46:48.151749] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2480520 00:31:22.044 [2024-05-15 19:46:48.151801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:22.044 qpair failed and we were unable to recover it. 00:31:22.044 [2024-05-15 19:46:48.161462] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.044 [2024-05-15 19:46:48.161627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.044 [2024-05-15 19:46:48.161661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.044 [2024-05-15 19:46:48.161676] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.044 [2024-05-15 19:46:48.161690] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2480520 00:31:22.044 [2024-05-15 19:46:48.161719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:22.044 qpair failed and we were unable to recover it. 
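One hedged way to provoke the same pattern by hand (a sketch only; target_disconnect.sh may drive the fault differently) is to drop and re-add the subsystem listener while an initiator keeps I/O running, which produces the same mix of aborted commands and rejected CONNECT attempts:

    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420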
00:31:22.044 [2024-05-15 19:46:48.162135] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248e0f0 (9): Bad file descriptor 00:31:22.044 Initializing NVMe Controllers 00:31:22.044 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:22.044 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:22.044 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:31:22.044 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:31:22.044 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:31:22.044 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:31:22.044 Initialization complete. Launching workers. 00:31:22.044 Starting thread on core 1 00:31:22.044 Starting thread on core 2 00:31:22.044 Starting thread on core 3 00:31:22.044 Starting thread on core 0 00:31:22.044 19:46:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:31:22.044 00:31:22.044 real 0m11.400s 00:31:22.044 user 0m20.860s 00:31:22.044 sys 0m3.992s 00:31:22.044 19:46:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:22.044 19:46:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:22.044 ************************************ 00:31:22.044 END TEST nvmf_target_disconnect_tc2 00:31:22.044 ************************************ 00:31:22.044 19:46:48 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:31:22.044 19:46:48 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:31:22.044 19:46:48 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:31:22.044 19:46:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:22.044 19:46:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:31:22.305 19:46:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:22.305 19:46:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:31:22.305 19:46:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:22.305 19:46:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:22.305 rmmod nvme_tcp 00:31:22.305 rmmod nvme_fabrics 00:31:22.305 rmmod nvme_keyring 00:31:22.305 19:46:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:22.305 19:46:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:31:22.305 19:46:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:31:22.305 19:46:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 3804211 ']' 00:31:22.305 19:46:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 3804211 00:31:22.305 19:46:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # '[' -z 3804211 ']' 00:31:22.305 19:46:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # kill -0 3804211 00:31:22.305 19:46:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # uname 00:31:22.305 19:46:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:22.305 19:46:48 
nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3804211 00:31:22.305 19:46:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_4 00:31:22.305 19:46:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_4 = sudo ']' 00:31:22.305 19:46:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3804211' 00:31:22.305 killing process with pid 3804211 00:31:22.305 19:46:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # kill 3804211 00:31:22.305 [2024-05-15 19:46:48.344099] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:22.305 19:46:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # wait 3804211 00:31:22.567 19:46:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:22.567 19:46:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:22.567 19:46:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:22.567 19:46:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:22.567 19:46:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:22.567 19:46:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:22.567 19:46:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:22.567 19:46:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:24.480 19:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:24.480 00:31:24.480 real 0m22.638s 00:31:24.480 user 0m48.598s 00:31:24.480 sys 0m10.783s 00:31:24.480 19:46:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:24.480 19:46:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:24.480 ************************************ 00:31:24.480 END TEST nvmf_target_disconnect 00:31:24.480 ************************************ 00:31:24.480 19:46:50 nvmf_tcp -- nvmf/nvmf.sh@125 -- # timing_exit host 00:31:24.480 19:46:50 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:24.480 19:46:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:24.480 19:46:50 nvmf_tcp -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:31:24.480 00:31:24.480 real 24m0.604s 00:31:24.480 user 49m34.392s 00:31:24.480 sys 7m57.444s 00:31:24.480 19:46:50 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:24.480 19:46:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:24.480 ************************************ 00:31:24.480 END TEST nvmf_tcp 00:31:24.480 ************************************ 00:31:24.742 19:46:50 -- spdk/autotest.sh@284 -- # [[ 0 -eq 0 ]] 00:31:24.742 19:46:50 -- spdk/autotest.sh@285 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:24.742 19:46:50 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:24.742 19:46:50 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:24.742 19:46:50 -- common/autotest_common.sh@10 -- # set +x 00:31:24.742 
************************************ 00:31:24.742 START TEST spdkcli_nvmf_tcp 00:31:24.742 ************************************ 00:31:24.742 19:46:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:24.742 * Looking for test storage... 00:31:24.742 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:31:24.742 19:46:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:31:24.742 19:46:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:31:24.742 19:46:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:31:24.742 19:46:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:24.742 19:46:50 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3806085 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3806085 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 3806085 ']' 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:24.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:24.743 19:46:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:25.004 [2024-05-15 19:46:50.946134] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:31:25.004 [2024-05-15 19:46:50.946193] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3806085 ] 00:31:25.004 EAL: No free 2048 kB hugepages reported on node 1 00:31:25.004 [2024-05-15 19:46:51.035341] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:25.004 [2024-05-15 19:46:51.133185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:25.004 [2024-05-15 19:46:51.133193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:25.948 19:46:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:25.948 19:46:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:31:25.948 19:46:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:31:25.948 19:46:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:25.948 19:46:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:25.948 19:46:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:31:25.948 19:46:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:31:25.948 19:46:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:31:25.948 19:46:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:25.948 19:46:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:25.948 19:46:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:31:25.948 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:31:25.948 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:31:25.948 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:31:25.948 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:31:25.948 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:31:25.948 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:31:25.948 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:25.948 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:31:25.948 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:31:25.948 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:25.948 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:25.948 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:31:25.948 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:25.948 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:25.948 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:31:25.948 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:25.948 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:25.948 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:25.948 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:25.948 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:31:25.948 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:31:25.948 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:25.948 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:31:25.948 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:25.948 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:31:25.948 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:31:25.948 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:31:25.948 ' 00:31:28.504 [2024-05-15 19:46:54.231091] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:29.460 [2024-05-15 19:46:55.394467] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:31:29.460 [2024-05-15 19:46:55.395015] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:31:31.434 [2024-05-15 19:46:57.533222] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:31:33.347 [2024-05-15 19:46:59.366746] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:31:34.730 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:31:34.730 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:31:34.730 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:31:34.730 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:31:34.730 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:31:34.730 
Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:31:34.730 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:31:34.730 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:34.730 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:31:34.730 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:31:34.730 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:34.730 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:34.730 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:31:34.730 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:34.730 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:34.730 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:31:34.730 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:34.730 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:34.730 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:34.730 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:34.730 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:31:34.730 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:31:34.730 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:34.730 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:31:34.730 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:34.730 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:31:34.730 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:31:34.730 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:31:34.730 19:47:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:31:34.730 19:47:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:34.731 19:47:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:34.991 19:47:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter 
spdkcli_check_match 00:31:34.991 19:47:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:34.991 19:47:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:34.991 19:47:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:31:34.991 19:47:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:31:35.252 19:47:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:31:35.252 19:47:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:31:35.252 19:47:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:31:35.252 19:47:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:35.252 19:47:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:35.252 19:47:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:31:35.252 19:47:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:35.252 19:47:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:35.252 19:47:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:31:35.252 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:31:35.252 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:35.252 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:31:35.252 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:31:35.252 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:31:35.252 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:31:35.252 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:35.252 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:31:35.252 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:31:35.252 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:31:35.252 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:31:35.252 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:31:35.252 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:31:35.253 ' 00:31:40.540 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:31:40.540 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:31:40.540 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:40.540 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:31:40.540 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:31:40.540 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:31:40.540 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:31:40.540 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:40.540 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:31:40.540 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:31:40.540 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:31:40.540 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:31:40.540 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:31:40.540 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:31:40.540 19:47:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:31:40.540 19:47:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:40.540 19:47:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:40.540 19:47:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3806085 00:31:40.540 19:47:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 3806085 ']' 00:31:40.540 19:47:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 3806085 00:31:40.540 19:47:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:31:40.540 19:47:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:40.540 19:47:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3806085 00:31:40.540 19:47:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:40.540 19:47:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:40.540 19:47:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3806085' 00:31:40.540 killing process with pid 3806085 00:31:40.540 19:47:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 3806085 00:31:40.540 [2024-05-15 19:47:06.427933] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:40.540 19:47:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 3806085 00:31:40.540 19:47:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:31:40.540 19:47:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:31:40.540 19:47:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3806085 ']' 00:31:40.540 19:47:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3806085 00:31:40.540 19:47:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 3806085 ']' 00:31:40.540 19:47:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 3806085 00:31:40.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3806085) - No such process 00:31:40.540 19:47:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 3806085 is not found' 00:31:40.540 Process with pid 3806085 is not found 00:31:40.540 19:47:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:31:40.540 19:47:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:31:40.540 19:47:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:31:40.540 00:31:40.540 real 0m15.816s 00:31:40.540 user 0m32.650s 00:31:40.540 sys 0m0.798s 00:31:40.540 19:47:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:40.540 19:47:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:40.540 ************************************ 00:31:40.540 END TEST spdkcli_nvmf_tcp 00:31:40.540 ************************************ 00:31:40.540 19:47:06 -- spdk/autotest.sh@286 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:40.540 19:47:06 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:40.540 19:47:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:40.540 19:47:06 -- common/autotest_common.sh@10 -- # set +x 00:31:40.540 ************************************ 00:31:40.540 START TEST nvmf_identify_passthru 00:31:40.540 ************************************ 00:31:40.540 19:47:06 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:40.802 * Looking for test storage... 00:31:40.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:40.802 19:47:06 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:40.802 19:47:06 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:31:40.802 19:47:06 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:40.802 19:47:06 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:40.802 19:47:06 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:40.802 19:47:06 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:40.802 19:47:06 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:40.802 19:47:06 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:40.802 19:47:06 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:40.802 19:47:06 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:40.802 19:47:06 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:40.802 19:47:06 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:40.802 19:47:06 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:40.802 19:47:06 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:40.802 19:47:06 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:40.802 19:47:06 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:40.802 19:47:06 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:40.802 19:47:06 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:40.802 19:47:06 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:40.802 19:47:06 
nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:40.802 19:47:06 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:40.802 19:47:06 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:40.802 19:47:06 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.802 19:47:06 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.803 19:47:06 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.803 19:47:06 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:40.803 19:47:06 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.803 19:47:06 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:31:40.803 19:47:06 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:40.803 19:47:06 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:40.803 19:47:06 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:40.803 19:47:06 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:40.803 19:47:06 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:40.803 19:47:06 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:40.803 19:47:06 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:40.803 19:47:06 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:40.803 19:47:06 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:40.803 19:47:06 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:40.803 19:47:06 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:40.803 19:47:06 nvmf_identify_passthru -- scripts/common.sh@517 -- # 
source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:40.803 19:47:06 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.803 19:47:06 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.803 19:47:06 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.803 19:47:06 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:40.803 19:47:06 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.803 19:47:06 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:31:40.803 19:47:06 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:40.803 19:47:06 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:40.803 19:47:06 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:40.803 19:47:06 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:40.803 19:47:06 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:40.803 19:47:06 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:40.803 19:47:06 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:40.803 19:47:06 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:40.803 19:47:06 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:40.803 19:47:06 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:40.803 19:47:06 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:31:40.803 19:47:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:48.946 19:47:14 
nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:48.946 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@346 
-- # [[ ice == unbound ]] 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:48.946 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:48.946 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:48.947 Found net devices under 0000:31:00.0: cvl_0_0 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:48.947 Found net devices under 0000:31:00.1: cvl_0_1 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ 
yes == yes ]] 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:48.947 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:48.947 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.568 ms 00:31:48.947 00:31:48.947 --- 10.0.0.2 ping statistics --- 00:31:48.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:48.947 rtt min/avg/max/mdev = 0.568/0.568/0.568/0.000 ms 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:48.947 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:48.947 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:31:48.947 00:31:48.947 --- 10.0.0.1 ping statistics --- 00:31:48.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:48.947 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:48.947 19:47:14 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:48.947 19:47:14 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:31:48.947 19:47:14 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:48.947 19:47:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:48.947 19:47:14 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:31:48.947 19:47:14 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:31:48.947 19:47:14 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:31:48.947 19:47:14 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:31:48.947 19:47:14 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:31:48.947 19:47:14 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:31:48.947 19:47:14 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:31:48.947 19:47:14 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:48.947 19:47:14 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:48.947 19:47:14 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:31:48.947 19:47:14 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:31:48.947 19:47:14 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:65:00.0 00:31:48.947 19:47:14 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:65:00.0 00:31:48.947 19:47:14 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:31:48.947 19:47:14 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:31:48.947 19:47:14 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:31:48.947 19:47:14 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:31:48.947 19:47:14 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:31:48.947 EAL: No free 2048 kB hugepages reported on node 1 00:31:49.208 
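For reference, the nvmf_tcp_init sequence recorded above boils down to the following shell steps — a condensed sketch, assuming the two E810 ports enumerate as cvl_0_0 and cvl_0_1 as they do on this host (the interface names and the 10.0.0.0/24 addresses are the ones this job prints and will differ elsewhere):

  # move one port into a private namespace for the target, keep the other as the initiator side
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator

Every later target-side command in this test is wrapped in "ip netns exec cvl_0_0_ns_spdk", so the target listens on 10.0.0.2 while the initiator connects from 10.0.0.1.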
19:47:15 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605494 00:31:49.208 19:47:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:31:49.208 19:47:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:31:49.208 19:47:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:31:49.208 EAL: No free 2048 kB hugepages reported on node 1 00:31:49.780 19:47:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:31:49.780 19:47:15 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:31:49.780 19:47:15 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:49.780 19:47:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:49.780 19:47:15 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:31:49.780 19:47:15 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:49.780 19:47:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:49.780 19:47:15 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3813420 00:31:49.780 19:47:15 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:49.780 19:47:15 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:31:49.780 19:47:15 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3813420 00:31:49.780 19:47:15 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 3813420 ']' 00:31:49.780 19:47:15 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:49.780 19:47:15 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:49.780 19:47:15 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:49.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:49.780 19:47:15 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:49.780 19:47:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:49.780 [2024-05-15 19:47:15.827337] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:31:49.780 [2024-05-15 19:47:15.827412] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:49.780 EAL: No free 2048 kB hugepages reported on node 1 00:31:49.780 [2024-05-15 19:47:15.923088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:50.041 [2024-05-15 19:47:16.019599] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:50.041 [2024-05-15 19:47:16.019662] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
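Because the target was started with --wait-for-rpc, it sits idle until it is configured over /var/tmp/spdk.sock; the raw JSON-RPC requests logged below are that configuration. A rough rpc.py equivalent (a sketch only — the socket path is the default one waitforlisten polls, and 0000:65:00.0 is the NVMe BDF this job detected):

  # enable identify passthru before the subsystem layer initializes, then finish startup
  scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
  scripts/rpc.py framework_start_init
  # build the passthru path: TCP transport, local PCIe controller, subsystem, listener
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The test then runs spdk_nvme_identify against the TCP listener and checks that the serial and model numbers it reports match the ones read directly from the PCIe device above.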
00:31:50.041 [2024-05-15 19:47:16.019671] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:50.041 [2024-05-15 19:47:16.019678] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:50.041 [2024-05-15 19:47:16.019684] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:50.041 [2024-05-15 19:47:16.019815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:50.041 [2024-05-15 19:47:16.019948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:50.041 [2024-05-15 19:47:16.020114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:50.041 [2024-05-15 19:47:16.020114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:50.612 19:47:16 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:50.612 19:47:16 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:31:50.612 19:47:16 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:31:50.612 19:47:16 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.612 19:47:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:50.612 INFO: Log level set to 20 00:31:50.612 INFO: Requests: 00:31:50.612 { 00:31:50.612 "jsonrpc": "2.0", 00:31:50.612 "method": "nvmf_set_config", 00:31:50.612 "id": 1, 00:31:50.612 "params": { 00:31:50.612 "admin_cmd_passthru": { 00:31:50.612 "identify_ctrlr": true 00:31:50.612 } 00:31:50.612 } 00:31:50.612 } 00:31:50.612 00:31:50.612 INFO: response: 00:31:50.612 { 00:31:50.612 "jsonrpc": "2.0", 00:31:50.612 "id": 1, 00:31:50.612 "result": true 00:31:50.612 } 00:31:50.612 00:31:50.612 19:47:16 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.612 19:47:16 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:31:50.612 19:47:16 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.612 19:47:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:50.612 INFO: Setting log level to 20 00:31:50.612 INFO: Setting log level to 20 00:31:50.612 INFO: Log level set to 20 00:31:50.612 INFO: Log level set to 20 00:31:50.612 INFO: Requests: 00:31:50.612 { 00:31:50.612 "jsonrpc": "2.0", 00:31:50.612 "method": "framework_start_init", 00:31:50.612 "id": 1 00:31:50.612 } 00:31:50.612 00:31:50.612 INFO: Requests: 00:31:50.612 { 00:31:50.612 "jsonrpc": "2.0", 00:31:50.612 "method": "framework_start_init", 00:31:50.612 "id": 1 00:31:50.612 } 00:31:50.612 00:31:50.612 [2024-05-15 19:47:16.785080] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:31:50.612 INFO: response: 00:31:50.612 { 00:31:50.612 "jsonrpc": "2.0", 00:31:50.612 "id": 1, 00:31:50.612 "result": true 00:31:50.612 } 00:31:50.612 00:31:50.612 INFO: response: 00:31:50.612 { 00:31:50.612 "jsonrpc": "2.0", 00:31:50.612 "id": 1, 00:31:50.612 "result": true 00:31:50.612 } 00:31:50.612 00:31:50.612 19:47:16 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.612 19:47:16 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:50.612 19:47:16 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.612 19:47:16 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:31:50.612 INFO: Setting log level to 40 00:31:50.612 INFO: Setting log level to 40 00:31:50.612 INFO: Setting log level to 40 00:31:50.873 [2024-05-15 19:47:16.798353] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:50.873 19:47:16 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.873 19:47:16 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:31:50.873 19:47:16 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:50.873 19:47:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:50.873 19:47:16 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:31:50.873 19:47:16 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.873 19:47:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:51.134 Nvme0n1 00:31:51.134 19:47:17 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.134 19:47:17 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:31:51.134 19:47:17 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.134 19:47:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:51.134 19:47:17 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.134 19:47:17 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:51.134 19:47:17 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.134 19:47:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:51.134 19:47:17 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.134 19:47:17 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:51.134 19:47:17 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.134 19:47:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:51.134 [2024-05-15 19:47:17.190299] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:31:51.134 [2024-05-15 19:47:17.190558] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:51.134 19:47:17 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.134 19:47:17 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:31:51.134 19:47:17 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.134 19:47:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:51.134 [ 00:31:51.134 { 00:31:51.134 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:51.134 "subtype": "Discovery", 00:31:51.134 "listen_addresses": [], 00:31:51.134 "allow_any_host": true, 00:31:51.134 "hosts": [] 00:31:51.134 }, 00:31:51.134 { 00:31:51.134 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:51.134 "subtype": "NVMe", 00:31:51.134 "listen_addresses": [ 00:31:51.134 { 00:31:51.134 "trtype": "TCP", 
00:31:51.134 "adrfam": "IPv4", 00:31:51.134 "traddr": "10.0.0.2", 00:31:51.134 "trsvcid": "4420" 00:31:51.134 } 00:31:51.134 ], 00:31:51.134 "allow_any_host": true, 00:31:51.134 "hosts": [], 00:31:51.134 "serial_number": "SPDK00000000000001", 00:31:51.134 "model_number": "SPDK bdev Controller", 00:31:51.134 "max_namespaces": 1, 00:31:51.134 "min_cntlid": 1, 00:31:51.134 "max_cntlid": 65519, 00:31:51.134 "namespaces": [ 00:31:51.134 { 00:31:51.134 "nsid": 1, 00:31:51.134 "bdev_name": "Nvme0n1", 00:31:51.134 "name": "Nvme0n1", 00:31:51.134 "nguid": "36344730526054940025384500000023", 00:31:51.134 "uuid": "36344730-5260-5494-0025-384500000023" 00:31:51.134 } 00:31:51.134 ] 00:31:51.134 } 00:31:51.134 ] 00:31:51.134 19:47:17 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.134 19:47:17 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:51.134 19:47:17 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:31:51.134 19:47:17 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:31:51.134 EAL: No free 2048 kB hugepages reported on node 1 00:31:51.396 19:47:17 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605494 00:31:51.396 19:47:17 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:51.396 19:47:17 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:31:51.396 19:47:17 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:31:51.396 EAL: No free 2048 kB hugepages reported on node 1 00:31:51.396 19:47:17 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:31:51.396 19:47:17 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605494 '!=' S64GNE0R605494 ']' 00:31:51.396 19:47:17 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:31:51.396 19:47:17 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:51.396 19:47:17 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.396 19:47:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:51.396 19:47:17 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.396 19:47:17 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:31:51.396 19:47:17 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:31:51.396 19:47:17 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:51.396 19:47:17 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:31:51.396 19:47:17 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:51.396 19:47:17 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:31:51.396 19:47:17 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:51.396 19:47:17 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:51.396 rmmod nvme_tcp 00:31:51.396 rmmod nvme_fabrics 00:31:51.396 rmmod 
nvme_keyring 00:31:51.657 19:47:17 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:51.657 19:47:17 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:31:51.657 19:47:17 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:31:51.657 19:47:17 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 3813420 ']' 00:31:51.657 19:47:17 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 3813420 00:31:51.657 19:47:17 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 3813420 ']' 00:31:51.657 19:47:17 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 3813420 00:31:51.657 19:47:17 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:31:51.657 19:47:17 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:51.657 19:47:17 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3813420 00:31:51.657 19:47:17 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:51.657 19:47:17 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:51.657 19:47:17 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3813420' 00:31:51.657 killing process with pid 3813420 00:31:51.657 19:47:17 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 3813420 00:31:51.657 [2024-05-15 19:47:17.662254] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:51.657 19:47:17 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 3813420 00:31:51.918 19:47:17 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:51.918 19:47:17 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:51.918 19:47:17 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:51.918 19:47:17 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:51.918 19:47:17 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:51.918 19:47:17 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:51.918 19:47:17 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:51.918 19:47:17 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:53.833 19:47:19 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:53.833 00:31:53.833 real 0m13.353s 00:31:53.833 user 0m10.261s 00:31:53.833 sys 0m6.573s 00:31:53.833 19:47:19 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:53.833 19:47:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:53.833 ************************************ 00:31:53.833 END TEST nvmf_identify_passthru 00:31:53.833 ************************************ 00:31:54.093 19:47:20 -- spdk/autotest.sh@288 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:54.093 19:47:20 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:54.093 19:47:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:54.093 19:47:20 -- common/autotest_common.sh@10 -- # set +x 00:31:54.093 ************************************ 00:31:54.093 START TEST nvmf_dif 
00:31:54.093 ************************************ 00:31:54.093 19:47:20 nvmf_dif -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:54.093 * Looking for test storage... 00:31:54.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:54.094 19:47:20 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:54.094 19:47:20 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:31:54.094 19:47:20 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:54.094 19:47:20 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:54.094 19:47:20 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:54.094 19:47:20 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:54.094 19:47:20 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:54.094 19:47:20 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:54.094 19:47:20 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:54.094 19:47:20 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:54.094 19:47:20 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:54.094 19:47:20 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:54.094 19:47:20 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:54.094 19:47:20 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:54.094 19:47:20 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:54.094 19:47:20 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:54.094 19:47:20 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:54.094 19:47:20 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:54.094 19:47:20 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:54.094 19:47:20 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:54.094 19:47:20 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:54.094 19:47:20 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:54.094 19:47:20 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.094 19:47:20 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.094 19:47:20 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.094 19:47:20 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:31:54.094 19:47:20 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.094 19:47:20 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:31:54.094 19:47:20 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:54.094 19:47:20 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:54.094 19:47:20 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:54.094 19:47:20 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:54.094 19:47:20 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:54.094 19:47:20 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:54.094 19:47:20 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:54.094 19:47:20 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:54.094 19:47:20 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:31:54.094 19:47:20 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:31:54.094 19:47:20 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:31:54.094 19:47:20 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:31:54.094 19:47:20 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:31:54.094 19:47:20 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:54.094 19:47:20 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:54.094 19:47:20 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:54.094 19:47:20 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:54.094 19:47:20 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:54.094 19:47:20 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:54.094 19:47:20 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:54.094 19:47:20 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:54.094 19:47:20 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:54.094 19:47:20 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:54.094 19:47:20 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:31:54.094 19:47:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:02.233 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:02.233 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:02.233 19:47:27 nvmf_dif -- 
nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:02.233 Found net devices under 0000:31:00.0: cvl_0_0 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:02.233 Found net devices under 0000:31:00.1: cvl_0_1 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:02.233 19:47:27 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:02.233 19:47:28 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:02.233 19:47:28 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:02.233 19:47:28 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:02.233 19:47:28 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:02.233 19:47:28 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:02.233 19:47:28 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:02.233 19:47:28 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:02.233 19:47:28 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:02.233 19:47:28 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:02.233 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:02.233 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.450 ms 00:32:02.233 00:32:02.233 --- 10.0.0.2 ping statistics --- 00:32:02.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:02.233 rtt min/avg/max/mdev = 0.450/0.450/0.450/0.000 ms 00:32:02.233 19:47:28 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:02.233 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:02.233 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:32:02.233 00:32:02.233 --- 10.0.0.1 ping statistics --- 00:32:02.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:02.233 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:32:02.233 19:47:28 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:02.233 19:47:28 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:32:02.233 19:47:28 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:32:02.233 19:47:28 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:06.444 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:32:06.444 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:32:06.444 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:32:06.444 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:32:06.444 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:32:06.444 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:32:06.444 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:32:06.444 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:32:06.444 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:32:06.444 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:32:06.444 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:32:06.444 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:32:06.444 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:32:06.444 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:32:06.444 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:32:06.444 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:32:06.444 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:32:06.444 19:47:32 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:06.444 19:47:32 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:06.444 19:47:32 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:06.444 19:47:32 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:06.444 19:47:32 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:06.444 19:47:32 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:06.444 19:47:32 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:32:06.444 19:47:32 nvmf_dif -- 
target/dif.sh@137 -- # nvmfappstart 00:32:06.444 19:47:32 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:06.444 19:47:32 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:06.444 19:47:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:06.444 19:47:32 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=3820185 00:32:06.444 19:47:32 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 3820185 00:32:06.444 19:47:32 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:32:06.444 19:47:32 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 3820185 ']' 00:32:06.444 19:47:32 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:06.444 19:47:32 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:06.444 19:47:32 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:06.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:06.444 19:47:32 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:06.444 19:47:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:06.444 [2024-05-15 19:47:32.405780] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:32:06.444 [2024-05-15 19:47:32.405830] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:06.444 EAL: No free 2048 kB hugepages reported on node 1 00:32:06.444 [2024-05-15 19:47:32.494987] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:06.444 [2024-05-15 19:47:32.575116] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:06.444 [2024-05-15 19:47:32.575167] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:06.444 [2024-05-15 19:47:32.575175] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:06.444 [2024-05-15 19:47:32.575182] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:06.444 [2024-05-15 19:47:32.575188] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
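Once this second target is up, the records that follow configure it for DIF: the transport is created with --dif-insert-or-strip and the workload runs against a null bdev carrying 16 bytes of metadata with DIF type 1 (the NULL_* defaults set earlier). Condensed into rpc.py form (a sketch — the arguments are the ones the xtrace below passes through rpc_cmd):

  scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

fio is then launched with the spdk_bdev ioengine and a JSON config (produced by gen_nvmf_target_json) whose bdev_nvme_attach_controller entry reconnects to that listener from the initiator side.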
00:32:06.444 [2024-05-15 19:47:32.575212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:07.386 19:47:33 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:07.386 19:47:33 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:32:07.386 19:47:33 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:07.386 19:47:33 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:07.386 19:47:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:07.386 19:47:33 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:07.386 19:47:33 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:32:07.386 19:47:33 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:32:07.386 19:47:33 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.386 19:47:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:07.386 [2024-05-15 19:47:33.339672] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:07.386 19:47:33 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.386 19:47:33 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:32:07.386 19:47:33 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:32:07.386 19:47:33 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:07.386 19:47:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:07.386 ************************************ 00:32:07.386 START TEST fio_dif_1_default 00:32:07.386 ************************************ 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:07.386 bdev_null0 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:07.386 [2024-05-15 19:47:33.435857] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:32:07.386 [2024-05-15 19:47:33.436143] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:07.386 { 00:32:07.386 "params": { 00:32:07.386 "name": "Nvme$subsystem", 00:32:07.386 "trtype": "$TEST_TRANSPORT", 00:32:07.386 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:07.386 "adrfam": "ipv4", 00:32:07.386 "trsvcid": "$NVMF_PORT", 00:32:07.386 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:07.386 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:07.386 "hdgst": ${hdgst:-false}, 00:32:07.386 "ddgst": ${ddgst:-false} 00:32:07.386 }, 00:32:07.386 "method": "bdev_nvme_attach_controller" 00:32:07.386 } 00:32:07.386 EOF 00:32:07.386 )") 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in 
"${sanitizers[@]}" 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:07.386 "params": { 00:32:07.386 "name": "Nvme0", 00:32:07.386 "trtype": "tcp", 00:32:07.386 "traddr": "10.0.0.2", 00:32:07.386 "adrfam": "ipv4", 00:32:07.386 "trsvcid": "4420", 00:32:07.386 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:07.386 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:07.386 "hdgst": false, 00:32:07.386 "ddgst": false 00:32:07.386 }, 00:32:07.386 "method": "bdev_nvme_attach_controller" 00:32:07.386 }' 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:07.386 19:47:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:07.956 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:07.956 fio-3.35 00:32:07.956 Starting 1 thread 00:32:07.956 EAL: No free 2048 kB hugepages reported on node 1 00:32:20.188 00:32:20.188 filename0: (groupid=0, jobs=1): err= 0: pid=3820715: Wed May 15 19:47:44 2024 00:32:20.188 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10042msec) 00:32:20.188 slat (nsec): min=8261, max=32857, avg=8481.42, stdev=1209.68 00:32:20.188 clat (usec): min=41883, max=43156, avg=41990.47, stdev=121.62 00:32:20.188 lat (usec): min=41891, max=43189, avg=41998.95, stdev=121.98 00:32:20.188 clat percentiles (usec): 00:32:20.188 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:32:20.188 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:32:20.188 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:32:20.188 | 99.00th=[42730], 99.50th=[43254], 
99.90th=[43254], 99.95th=[43254], 00:32:20.188 | 99.99th=[43254] 00:32:20.188 bw ( KiB/s): min= 352, max= 384, per=99.79%, avg=380.80, stdev= 9.85, samples=20 00:32:20.188 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:32:20.188 lat (msec) : 50=100.00% 00:32:20.189 cpu : usr=95.37%, sys=4.37%, ctx=14, majf=0, minf=216 00:32:20.189 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:20.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:20.189 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:20.189 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:20.189 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:20.189 00:32:20.189 Run status group 0 (all jobs): 00:32:20.189 READ: bw=381KiB/s (390kB/s), 381KiB/s-381KiB/s (390kB/s-390kB/s), io=3824KiB (3916kB), run=10042-10042msec 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.189 00:32:20.189 real 0m11.151s 00:32:20.189 user 0m18.971s 00:32:20.189 sys 0m0.804s 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:20.189 ************************************ 00:32:20.189 END TEST fio_dif_1_default 00:32:20.189 ************************************ 00:32:20.189 19:47:44 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:32:20.189 19:47:44 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:32:20.189 19:47:44 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:20.189 19:47:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:20.189 ************************************ 00:32:20.189 START TEST fio_dif_1_multi_subsystems 00:32:20.189 ************************************ 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- 
# local sub 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:20.189 bdev_null0 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:20.189 [2024-05-15 19:47:44.668452] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:20.189 bdev_null1 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.189 19:47:44 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:20.189 { 00:32:20.189 "params": { 00:32:20.189 "name": "Nvme$subsystem", 00:32:20.189 "trtype": "$TEST_TRANSPORT", 00:32:20.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:20.189 "adrfam": "ipv4", 00:32:20.189 "trsvcid": "$NVMF_PORT", 00:32:20.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:20.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:20.189 "hdgst": ${hdgst:-false}, 00:32:20.189 "ddgst": ${ddgst:-false} 00:32:20.189 }, 00:32:20.189 "method": "bdev_nvme_attach_controller" 00:32:20.189 } 00:32:20.189 EOF 00:32:20.189 )") 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:20.189 { 00:32:20.189 "params": { 00:32:20.189 "name": "Nvme$subsystem", 00:32:20.189 "trtype": "$TEST_TRANSPORT", 00:32:20.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:20.189 "adrfam": "ipv4", 00:32:20.189 "trsvcid": "$NVMF_PORT", 00:32:20.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:20.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:20.189 "hdgst": ${hdgst:-false}, 00:32:20.189 "ddgst": ${ddgst:-false} 00:32:20.189 }, 00:32:20.189 "method": "bdev_nvme_attach_controller" 00:32:20.189 } 00:32:20.189 EOF 00:32:20.189 )") 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:32:20.189 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:32:20.190 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:32:20.190 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
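For reference, the per-subsystem RPCs that fio_dif_1_multi_subsystems issued above collapse into one loop: a DIF type 1 null bdev, a subsystem, a namespace and a TCP listener for each of cnode0 and cnode1. A sketch using rpc.py directly, with the socket path and SPDK_DIR carried over from the earlier sketch.

for i in 0 1; do
    # 64 MB null bdev with 512-byte blocks and 16 bytes of metadata, DIF type 1.
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock \
        bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 1
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock \
        nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
        --serial-number 53313233-$i --allow-any-host
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock \
        nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock \
        nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
        -t tcp -a 10.0.0.2 -s 4420
done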
00:32:20.190 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:32:20.190 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:20.190 "params": { 00:32:20.190 "name": "Nvme0", 00:32:20.190 "trtype": "tcp", 00:32:20.190 "traddr": "10.0.0.2", 00:32:20.190 "adrfam": "ipv4", 00:32:20.190 "trsvcid": "4420", 00:32:20.190 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:20.190 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:20.190 "hdgst": false, 00:32:20.190 "ddgst": false 00:32:20.190 }, 00:32:20.190 "method": "bdev_nvme_attach_controller" 00:32:20.190 },{ 00:32:20.190 "params": { 00:32:20.190 "name": "Nvme1", 00:32:20.190 "trtype": "tcp", 00:32:20.190 "traddr": "10.0.0.2", 00:32:20.190 "adrfam": "ipv4", 00:32:20.190 "trsvcid": "4420", 00:32:20.190 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:20.190 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:20.190 "hdgst": false, 00:32:20.190 "ddgst": false 00:32:20.190 }, 00:32:20.190 "method": "bdev_nvme_attach_controller" 00:32:20.190 }' 00:32:20.190 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:32:20.190 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:32:20.190 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:32:20.190 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:20.190 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:32:20.190 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:32:20.190 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:32:20.190 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:32:20.190 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:20.190 19:47:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:20.190 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:20.190 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:20.190 fio-3.35 00:32:20.190 Starting 2 threads 00:32:20.190 EAL: No free 2048 kB hugepages reported on node 1 00:32:30.200 00:32:30.200 filename0: (groupid=0, jobs=1): err= 0: pid=3822936: Wed May 15 19:47:56 2024 00:32:30.200 read: IOPS=185, BW=743KiB/s (761kB/s)(7440KiB/10016msec) 00:32:30.200 slat (nsec): min=8251, max=31290, avg=8628.83, stdev=1063.45 00:32:30.200 clat (usec): min=875, max=42825, avg=21515.57, stdev=20289.40 00:32:30.200 lat (usec): min=883, max=42856, avg=21524.20, stdev=20289.26 00:32:30.200 clat percentiles (usec): 00:32:30.200 | 1.00th=[ 938], 5.00th=[ 1106], 10.00th=[ 1156], 20.00th=[ 1188], 00:32:30.200 | 30.00th=[ 1205], 40.00th=[ 1221], 50.00th=[41157], 60.00th=[41681], 00:32:30.200 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:32:30.200 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:32:30.200 | 99.99th=[42730] 
00:32:30.200 bw ( KiB/s): min= 704, max= 768, per=66.12%, avg=742.40, stdev=32.17, samples=20 00:32:30.200 iops : min= 176, max= 192, avg=185.60, stdev= 8.04, samples=20 00:32:30.200 lat (usec) : 1000=2.20% 00:32:30.200 lat (msec) : 2=47.69%, 50=50.11% 00:32:30.200 cpu : usr=96.15%, sys=3.61%, ctx=14, majf=0, minf=76 00:32:30.200 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:30.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:30.200 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:30.200 issued rwts: total=1860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:30.200 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:30.200 filename1: (groupid=0, jobs=1): err= 0: pid=3822937: Wed May 15 19:47:56 2024 00:32:30.200 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10038msec) 00:32:30.200 slat (nsec): min=8250, max=60255, avg=8866.26, stdev=2266.79 00:32:30.200 clat (usec): min=40914, max=43030, avg=41973.78, stdev=161.67 00:32:30.200 lat (usec): min=40922, max=43042, avg=41982.65, stdev=161.75 00:32:30.200 clat percentiles (usec): 00:32:30.200 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:32:30.200 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:32:30.200 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:32:30.200 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:32:30.200 | 99.99th=[43254] 00:32:30.200 bw ( KiB/s): min= 352, max= 384, per=33.86%, avg=380.80, stdev= 9.85, samples=20 00:32:30.200 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:32:30.200 lat (msec) : 50=100.00% 00:32:30.200 cpu : usr=96.59%, sys=3.16%, ctx=15, majf=0, minf=204 00:32:30.200 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:30.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:30.200 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:30.200 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:30.200 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:30.200 00:32:30.200 Run status group 0 (all jobs): 00:32:30.200 READ: bw=1122KiB/s (1149kB/s), 381KiB/s-743KiB/s (390kB/s-761kB/s), io=11.0MiB (11.5MB), run=10016-10038msec 00:32:30.200 19:47:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:32:30.200 19:47:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:32:30.200 19:47:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:32:30.201 19:47:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:30.201 19:47:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:32:30.201 19:47:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:30.201 19:47:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.201 19:47:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:30.201 19:47:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.201 19:47:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:30.201 19:47:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 
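The "Run status group 0" summary just above is internally consistent: the two files read 7440 KiB and 3824 KiB over roughly ten seconds each, and the reported 1122KiB/s aggregate matches the total io divided by the longer of the two runtimes.

echo $(( 7440 + 3824 ))                                # total KiB read -> 11264, i.e. 11.0MiB
awk 'BEGIN { printf "%.0f KiB/s\n", 11264 / 10.038 }'  # 11264 KiB over 10038 ms -> 1122 KiB/s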
00:32:30.201 19:47:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:30.201 19:47:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.201 19:47:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:32:30.201 19:47:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:30.201 19:47:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:32:30.201 19:47:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:30.201 19:47:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.201 19:47:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:30.201 19:47:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.201 19:47:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:30.201 19:47:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.201 19:47:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:30.201 19:47:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.201 00:32:30.201 real 0m11.563s 00:32:30.201 user 0m35.556s 00:32:30.201 sys 0m0.998s 00:32:30.201 19:47:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:30.201 19:47:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:30.201 ************************************ 00:32:30.201 END TEST fio_dif_1_multi_subsystems 00:32:30.201 ************************************ 00:32:30.201 19:47:56 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:32:30.201 19:47:56 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:32:30.201 19:47:56 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:30.201 19:47:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:30.201 ************************************ 00:32:30.201 START TEST fio_dif_rand_params 00:32:30.201 ************************************ 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@18 -- # local sub_id=0 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:30.201 bdev_null0 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:30.201 [2024-05-15 19:47:56.319830] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:30.201 { 00:32:30.201 "params": { 00:32:30.201 "name": "Nvme$subsystem", 00:32:30.201 "trtype": "$TEST_TRANSPORT", 00:32:30.201 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:30.201 "adrfam": "ipv4", 00:32:30.201 "trsvcid": "$NVMF_PORT", 00:32:30.201 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:30.201 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:30.201 "hdgst": ${hdgst:-false}, 00:32:30.201 "ddgst": ${ddgst:-false} 00:32:30.201 }, 
00:32:30.201 "method": "bdev_nvme_attach_controller" 00:32:30.201 } 00:32:30.201 EOF 00:32:30.201 )") 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:30.201 "params": { 00:32:30.201 "name": "Nvme0", 00:32:30.201 "trtype": "tcp", 00:32:30.201 "traddr": "10.0.0.2", 00:32:30.201 "adrfam": "ipv4", 00:32:30.201 "trsvcid": "4420", 00:32:30.201 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:30.201 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:30.201 "hdgst": false, 00:32:30.201 "ddgst": false 00:32:30.201 }, 00:32:30.201 "method": "bdev_nvme_attach_controller" 00:32:30.201 }' 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:32:30.201 19:47:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:32:30.473 19:47:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:32:30.473 19:47:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:32:30.473 19:47:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:30.473 19:47:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:30.737 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:30.737 ... 
00:32:30.737 fio-3.35 00:32:30.737 Starting 3 threads 00:32:30.737 EAL: No free 2048 kB hugepages reported on node 1 00:32:37.410 00:32:37.410 filename0: (groupid=0, jobs=1): err= 0: pid=3825424: Wed May 15 19:48:02 2024 00:32:37.410 read: IOPS=188, BW=23.5MiB/s (24.7MB/s)(118MiB/5028msec) 00:32:37.410 slat (nsec): min=8322, max=50877, avg=9198.39, stdev=1656.08 00:32:37.410 clat (usec): min=6139, max=94060, avg=15913.06, stdev=13317.95 00:32:37.410 lat (usec): min=6148, max=94069, avg=15922.26, stdev=13318.07 00:32:37.410 clat percentiles (usec): 00:32:37.410 | 1.00th=[ 6980], 5.00th=[ 7767], 10.00th=[ 8291], 20.00th=[ 9241], 00:32:37.410 | 30.00th=[ 9765], 40.00th=[10421], 50.00th=[11469], 60.00th=[12387], 00:32:37.410 | 70.00th=[13435], 80.00th=[14484], 90.00th=[49546], 95.00th=[51643], 00:32:37.410 | 99.00th=[54264], 99.50th=[55313], 99.90th=[93848], 99.95th=[93848], 00:32:37.410 | 99.99th=[93848] 00:32:37.410 bw ( KiB/s): min=15616, max=36096, per=31.40%, avg=24166.40, stdev=6439.36, samples=10 00:32:37.410 iops : min= 122, max= 282, avg=188.80, stdev=50.31, samples=10 00:32:37.410 lat (msec) : 10=32.84%, 20=55.54%, 50=2.32%, 100=9.29% 00:32:37.410 cpu : usr=95.58%, sys=4.14%, ctx=9, majf=0, minf=63 00:32:37.410 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:37.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.410 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.410 issued rwts: total=947,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.410 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:37.410 filename0: (groupid=0, jobs=1): err= 0: pid=3825425: Wed May 15 19:48:02 2024 00:32:37.410 read: IOPS=221, BW=27.6MiB/s (29.0MB/s)(138MiB/5001msec) 00:32:37.410 slat (nsec): min=8304, max=67657, avg=10735.23, stdev=2493.44 00:32:37.410 clat (usec): min=4709, max=93428, avg=13550.18, stdev=13876.08 00:32:37.410 lat (usec): min=4720, max=93438, avg=13560.92, stdev=13876.08 00:32:37.410 clat percentiles (usec): 00:32:37.410 | 1.00th=[ 5145], 5.00th=[ 5604], 10.00th=[ 6390], 20.00th=[ 7242], 00:32:37.410 | 30.00th=[ 7701], 40.00th=[ 8356], 50.00th=[ 8979], 60.00th=[ 9503], 00:32:37.410 | 70.00th=[10028], 80.00th=[10552], 90.00th=[47973], 95.00th=[50070], 00:32:37.410 | 99.00th=[52691], 99.50th=[53216], 99.90th=[91751], 99.95th=[93848], 00:32:37.410 | 99.99th=[93848] 00:32:37.410 bw ( KiB/s): min=19968, max=36864, per=37.15%, avg=28586.67, stdev=5524.80, samples=9 00:32:37.410 iops : min= 156, max= 288, avg=223.33, stdev=43.16, samples=9 00:32:37.410 lat (msec) : 10=69.80%, 20=18.17%, 50=6.60%, 100=5.42% 00:32:37.410 cpu : usr=96.36%, sys=3.34%, ctx=11, majf=0, minf=185 00:32:37.410 IO depths : 1=1.3%, 2=98.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:37.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.410 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.410 issued rwts: total=1106,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.410 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:37.410 filename0: (groupid=0, jobs=1): err= 0: pid=3825426: Wed May 15 19:48:02 2024 00:32:37.410 read: IOPS=193, BW=24.1MiB/s (25.3MB/s)(121MiB/5024msec) 00:32:37.410 slat (nsec): min=8301, max=31547, avg=9003.97, stdev=1165.57 00:32:37.410 clat (usec): min=6526, max=94636, avg=15522.30, stdev=13332.06 00:32:37.410 lat (usec): min=6535, max=94644, avg=15531.30, stdev=13332.12 00:32:37.410 clat percentiles (usec): 
00:32:37.411 | 1.00th=[ 6849], 5.00th=[ 7701], 10.00th=[ 8094], 20.00th=[ 8848], 00:32:37.411 | 30.00th=[ 9765], 40.00th=[10421], 50.00th=[11600], 60.00th=[12518], 00:32:37.411 | 70.00th=[13304], 80.00th=[14353], 90.00th=[49021], 95.00th=[51643], 00:32:37.411 | 99.00th=[54264], 99.50th=[55313], 99.90th=[94897], 99.95th=[94897], 00:32:37.411 | 99.99th=[94897] 00:32:37.411 bw ( KiB/s): min=16896, max=33536, per=32.17%, avg=24755.20, stdev=5334.60, samples=10 00:32:37.411 iops : min= 132, max= 262, avg=193.40, stdev=41.68, samples=10 00:32:37.411 lat (msec) : 10=34.12%, 20=55.46%, 50=2.16%, 100=8.25% 00:32:37.411 cpu : usr=95.94%, sys=3.78%, ctx=8, majf=0, minf=82 00:32:37.411 IO depths : 1=1.6%, 2=98.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:37.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.411 issued rwts: total=970,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.411 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:37.411 00:32:37.411 Run status group 0 (all jobs): 00:32:37.411 READ: bw=75.2MiB/s (78.8MB/s), 23.5MiB/s-27.6MiB/s (24.7MB/s-29.0MB/s), io=378MiB (396MB), run=5001-5028msec 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:37.411 
19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:37.411 bdev_null0 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:37.411 [2024-05-15 19:48:02.499961] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:37.411 bdev_null1 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
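The same bdev_null_create / nvmf_create_subsystem / add_ns / add_listener sequence is repeated below for subsystems 1 and 2, this time with --dif-type 2. The thread count fio reports further down follows directly from the parameters set at target/dif.sh@109: files=2 adds two extra [filenameN] sections on top of filename0, and each section runs numjobs=8 workers.

files=2        # two additional bdevs/filenames beyond filename0
numjobs=8      # workers per [filenameN] section
echo $(( (files + 1) * numjobs ))   # -> 24, matching "Starting 24 threads" below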
00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:37.411 bdev_null2 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf 
/dev/fd/62 /dev/fd/61 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:37.411 { 00:32:37.411 "params": { 00:32:37.411 "name": "Nvme$subsystem", 00:32:37.411 "trtype": "$TEST_TRANSPORT", 00:32:37.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:37.411 "adrfam": "ipv4", 00:32:37.411 "trsvcid": "$NVMF_PORT", 00:32:37.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:37.411 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:37.411 "hdgst": ${hdgst:-false}, 00:32:37.411 "ddgst": ${ddgst:-false} 00:32:37.411 }, 00:32:37.411 "method": "bdev_nvme_attach_controller" 00:32:37.411 } 00:32:37.411 EOF 00:32:37.411 )") 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:32:37.411 19:48:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:37.412 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:37.412 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:37.412 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:32:37.412 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:37.412 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:32:37.412 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:37.412 19:48:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:37.412 19:48:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:37.412 { 00:32:37.412 "params": { 00:32:37.412 "name": "Nvme$subsystem", 00:32:37.412 "trtype": "$TEST_TRANSPORT", 00:32:37.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:37.412 "adrfam": "ipv4", 00:32:37.412 "trsvcid": "$NVMF_PORT", 00:32:37.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:37.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:37.412 "hdgst": ${hdgst:-false}, 00:32:37.412 "ddgst": ${ddgst:-false} 00:32:37.412 }, 00:32:37.412 "method": "bdev_nvme_attach_controller" 00:32:37.412 } 00:32:37.412 EOF 00:32:37.412 )") 00:32:37.412 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:37.412 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file <= files )) 00:32:37.412 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:37.412 19:48:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:37.412 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:37.412 19:48:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:37.412 19:48:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:37.412 19:48:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:37.412 { 00:32:37.412 "params": { 00:32:37.412 "name": "Nvme$subsystem", 00:32:37.412 "trtype": "$TEST_TRANSPORT", 00:32:37.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:37.412 "adrfam": "ipv4", 00:32:37.412 "trsvcid": "$NVMF_PORT", 00:32:37.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:37.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:37.412 "hdgst": ${hdgst:-false}, 00:32:37.412 "ddgst": ${ddgst:-false} 00:32:37.412 }, 00:32:37.412 "method": "bdev_nvme_attach_controller" 00:32:37.412 } 00:32:37.412 EOF 00:32:37.412 )") 00:32:37.412 19:48:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:37.412 19:48:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:32:37.412 19:48:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:32:37.412 19:48:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:37.412 "params": { 00:32:37.412 "name": "Nvme0", 00:32:37.412 "trtype": "tcp", 00:32:37.412 "traddr": "10.0.0.2", 00:32:37.412 "adrfam": "ipv4", 00:32:37.412 "trsvcid": "4420", 00:32:37.412 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:37.412 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:37.412 "hdgst": false, 00:32:37.412 "ddgst": false 00:32:37.412 }, 00:32:37.412 "method": "bdev_nvme_attach_controller" 00:32:37.412 },{ 00:32:37.412 "params": { 00:32:37.412 "name": "Nvme1", 00:32:37.412 "trtype": "tcp", 00:32:37.412 "traddr": "10.0.0.2", 00:32:37.412 "adrfam": "ipv4", 00:32:37.412 "trsvcid": "4420", 00:32:37.412 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:37.412 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:37.412 "hdgst": false, 00:32:37.412 "ddgst": false 00:32:37.412 }, 00:32:37.412 "method": "bdev_nvme_attach_controller" 00:32:37.412 },{ 00:32:37.412 "params": { 00:32:37.412 "name": "Nvme2", 00:32:37.412 "trtype": "tcp", 00:32:37.412 "traddr": "10.0.0.2", 00:32:37.412 "adrfam": "ipv4", 00:32:37.412 "trsvcid": "4420", 00:32:37.412 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:37.412 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:32:37.412 "hdgst": false, 00:32:37.412 "ddgst": false 00:32:37.412 }, 00:32:37.412 "method": "bdev_nvme_attach_controller" 00:32:37.412 }' 00:32:37.412 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:32:37.412 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:32:37.412 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:32:37.412 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:37.412 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:32:37.412 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:32:37.412 19:48:02 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1341 -- # asan_lib= 00:32:37.412 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:32:37.412 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:37.412 19:48:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:37.412 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:37.412 ... 00:32:37.412 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:37.412 ... 00:32:37.412 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:37.412 ... 00:32:37.412 fio-3.35 00:32:37.412 Starting 24 threads 00:32:37.412 EAL: No free 2048 kB hugepages reported on node 1 00:32:49.637 00:32:49.637 filename0: (groupid=0, jobs=1): err= 0: pid=3826808: Wed May 15 19:48:13 2024 00:32:49.637 read: IOPS=493, BW=1974KiB/s (2021kB/s)(19.3MiB/10004msec) 00:32:49.637 slat (usec): min=6, max=169, avg=38.99, stdev=26.87 00:32:49.637 clat (usec): min=10660, max=59818, avg=32087.63, stdev=3527.61 00:32:49.637 lat (usec): min=10684, max=59835, avg=32126.62, stdev=3527.13 00:32:49.637 clat percentiles (usec): 00:32:49.637 | 1.00th=[20841], 5.00th=[28181], 10.00th=[30802], 20.00th=[31327], 00:32:49.637 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:32:49.637 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33424], 95.00th=[34866], 00:32:49.637 | 99.00th=[45351], 99.50th=[53740], 99.90th=[60031], 99.95th=[60031], 00:32:49.637 | 99.99th=[60031] 00:32:49.637 bw ( KiB/s): min= 1792, max= 2096, per=4.15%, avg=1970.53, stdev=75.80, samples=19 00:32:49.637 iops : min= 448, max= 524, avg=492.63, stdev=18.95, samples=19 00:32:49.637 lat (msec) : 20=0.89%, 50=98.46%, 100=0.65% 00:32:49.637 cpu : usr=99.09%, sys=0.57%, ctx=42, majf=0, minf=68 00:32:49.637 IO depths : 1=4.0%, 2=8.8%, 4=20.2%, 8=58.0%, 16=9.1%, 32=0.0%, >=64=0.0% 00:32:49.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.637 complete : 0=0.0%, 4=93.0%, 8=1.9%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.637 issued rwts: total=4936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:49.637 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:49.637 filename0: (groupid=0, jobs=1): err= 0: pid=3826809: Wed May 15 19:48:13 2024 00:32:49.637 read: IOPS=505, BW=2023KiB/s (2072kB/s)(19.8MiB/10021msec) 00:32:49.637 slat (nsec): min=8296, max=96530, avg=14186.00, stdev=8447.51 00:32:49.637 clat (usec): min=2617, max=39724, avg=31511.17, stdev=3888.39 00:32:49.637 lat (usec): min=2634, max=39732, avg=31525.36, stdev=3887.92 00:32:49.637 clat percentiles (usec): 00:32:49.637 | 1.00th=[ 5800], 5.00th=[30540], 10.00th=[31065], 20.00th=[31589], 00:32:49.637 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:32:49.637 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:32:49.637 | 99.00th=[34341], 99.50th=[34341], 99.90th=[35914], 99.95th=[39584], 00:32:49.638 | 99.99th=[39584] 00:32:49.638 bw ( KiB/s): min= 1920, max= 2792, per=4.26%, avg=2021.20, stdev=192.36, samples=20 00:32:49.638 iops : min= 480, max= 698, avg=505.30, stdev=48.09, samples=20 00:32:49.638 lat (msec) : 4=0.32%, 10=1.26%, 20=0.93%, 
50=97.49% 00:32:49.638 cpu : usr=99.01%, sys=0.65%, ctx=16, majf=0, minf=44 00:32:49.638 IO depths : 1=6.0%, 2=12.0%, 4=24.3%, 8=51.1%, 16=6.6%, 32=0.0%, >=64=0.0% 00:32:49.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.638 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.638 issued rwts: total=5069,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:49.638 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:49.638 filename0: (groupid=0, jobs=1): err= 0: pid=3826810: Wed May 15 19:48:13 2024 00:32:49.638 read: IOPS=505, BW=2022KiB/s (2071kB/s)(19.8MiB/10016msec) 00:32:49.638 slat (usec): min=6, max=116, avg=35.46, stdev=23.97 00:32:49.638 clat (usec): min=14705, max=52587, avg=31342.52, stdev=3836.73 00:32:49.638 lat (usec): min=14732, max=52597, avg=31377.99, stdev=3840.72 00:32:49.638 clat percentiles (usec): 00:32:49.638 | 1.00th=[19268], 5.00th=[23462], 10.00th=[26870], 20.00th=[31065], 00:32:49.638 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:32:49.638 | 70.00th=[32113], 80.00th=[32637], 90.00th=[33162], 95.00th=[33817], 00:32:49.638 | 99.00th=[45876], 99.50th=[50070], 99.90th=[52691], 99.95th=[52691], 00:32:49.638 | 99.99th=[52691] 00:32:49.638 bw ( KiB/s): min= 1891, max= 2464, per=4.26%, avg=2024.58, stdev=129.22, samples=19 00:32:49.638 iops : min= 472, max= 616, avg=506.11, stdev=32.35, samples=19 00:32:49.638 lat (msec) : 20=1.86%, 50=97.59%, 100=0.55% 00:32:49.638 cpu : usr=99.03%, sys=0.61%, ctx=15, majf=0, minf=51 00:32:49.638 IO depths : 1=4.5%, 2=9.1%, 4=21.0%, 8=57.1%, 16=8.3%, 32=0.0%, >=64=0.0% 00:32:49.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.638 complete : 0=0.0%, 4=93.1%, 8=1.4%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.638 issued rwts: total=5064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:49.638 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:49.638 filename0: (groupid=0, jobs=1): err= 0: pid=3826811: Wed May 15 19:48:13 2024 00:32:49.638 read: IOPS=498, BW=1994KiB/s (2042kB/s)(19.5MiB/10013msec) 00:32:49.638 slat (usec): min=8, max=123, avg=21.40, stdev=20.79 00:32:49.638 clat (usec): min=5479, max=51370, avg=31921.57, stdev=2773.74 00:32:49.638 lat (usec): min=5493, max=51400, avg=31942.97, stdev=2773.22 00:32:49.638 clat percentiles (usec): 00:32:49.638 | 1.00th=[18744], 5.00th=[30540], 10.00th=[31065], 20.00th=[31589], 00:32:49.638 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:32:49.638 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817], 00:32:49.638 | 99.00th=[34341], 99.50th=[41157], 99.90th=[47449], 99.95th=[49021], 00:32:49.638 | 99.99th=[51119] 00:32:49.638 bw ( KiB/s): min= 1920, max= 2176, per=4.21%, avg=2000.84, stdev=75.14, samples=19 00:32:49.638 iops : min= 480, max= 544, avg=500.21, stdev=18.78, samples=19 00:32:49.638 lat (msec) : 10=0.32%, 20=0.68%, 50=98.96%, 100=0.04% 00:32:49.638 cpu : usr=99.02%, sys=0.63%, ctx=22, majf=0, minf=57 00:32:49.638 IO depths : 1=5.9%, 2=11.9%, 4=24.3%, 8=51.1%, 16=6.8%, 32=0.0%, >=64=0.0% 00:32:49.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.638 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.638 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:49.638 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:49.638 filename0: (groupid=0, jobs=1): err= 0: pid=3826812: Wed May 15 19:48:13 2024 00:32:49.638 read: 
IOPS=498, BW=1995KiB/s (2043kB/s)(19.5MiB/10022msec) 00:32:49.638 slat (usec): min=8, max=172, avg=39.34, stdev=28.21 00:32:49.638 clat (usec): min=10888, max=53292, avg=31736.72, stdev=2496.41 00:32:49.638 lat (usec): min=10898, max=53302, avg=31776.06, stdev=2498.40 00:32:49.638 clat percentiles (usec): 00:32:49.638 | 1.00th=[20841], 5.00th=[29754], 10.00th=[30802], 20.00th=[31327], 00:32:49.638 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:32:49.638 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:32:49.638 | 99.00th=[39060], 99.50th=[41681], 99.90th=[53216], 99.95th=[53216], 00:32:49.638 | 99.99th=[53216] 00:32:49.638 bw ( KiB/s): min= 1920, max= 2096, per=4.21%, avg=1997.05, stdev=69.91, samples=19 00:32:49.638 iops : min= 480, max= 524, avg=499.26, stdev=17.48, samples=19 00:32:49.638 lat (msec) : 20=0.66%, 50=99.22%, 100=0.12% 00:32:49.638 cpu : usr=98.88%, sys=0.79%, ctx=17, majf=0, minf=35 00:32:49.638 IO depths : 1=5.5%, 2=11.1%, 4=22.9%, 8=53.3%, 16=7.2%, 32=0.0%, >=64=0.0% 00:32:49.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.638 complete : 0=0.0%, 4=93.6%, 8=0.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.638 issued rwts: total=4999,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:49.638 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:49.638 filename0: (groupid=0, jobs=1): err= 0: pid=3826813: Wed May 15 19:48:13 2024 00:32:49.638 read: IOPS=493, BW=1973KiB/s (2021kB/s)(19.3MiB/10018msec) 00:32:49.638 slat (usec): min=7, max=124, avg=28.32, stdev=22.18 00:32:49.638 clat (usec): min=10606, max=56647, avg=32206.36, stdev=4721.04 00:32:49.638 lat (usec): min=10630, max=56657, avg=32234.68, stdev=4720.30 00:32:49.638 clat percentiles (usec): 00:32:49.638 | 1.00th=[17957], 5.00th=[25035], 10.00th=[29230], 20.00th=[31065], 00:32:49.638 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:32:49.638 | 70.00th=[32637], 80.00th=[32900], 90.00th=[34341], 95.00th=[40633], 00:32:49.638 | 99.00th=[50594], 99.50th=[56361], 99.90th=[56361], 99.95th=[56886], 00:32:49.638 | 99.99th=[56886] 00:32:49.638 bw ( KiB/s): min= 1840, max= 2088, per=4.15%, avg=1971.37, stdev=70.27, samples=19 00:32:49.638 iops : min= 460, max= 522, avg=492.84, stdev=17.57, samples=19 00:32:49.638 lat (msec) : 20=1.46%, 50=97.29%, 100=1.25% 00:32:49.638 cpu : usr=98.89%, sys=0.74%, ctx=18, majf=0, minf=42 00:32:49.638 IO depths : 1=2.7%, 2=5.9%, 4=16.9%, 8=63.5%, 16=11.1%, 32=0.0%, >=64=0.0% 00:32:49.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.638 complete : 0=0.0%, 4=92.3%, 8=3.1%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.638 issued rwts: total=4942,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:49.638 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:49.638 filename0: (groupid=0, jobs=1): err= 0: pid=3826815: Wed May 15 19:48:13 2024 00:32:49.638 read: IOPS=494, BW=1977KiB/s (2025kB/s)(19.3MiB/10017msec) 00:32:49.638 slat (usec): min=8, max=126, avg=40.40, stdev=24.00 00:32:49.638 clat (usec): min=16573, max=65223, avg=32013.47, stdev=2547.26 00:32:49.638 lat (usec): min=16585, max=65246, avg=32053.87, stdev=2547.79 00:32:49.638 clat percentiles (usec): 00:32:49.638 | 1.00th=[22938], 5.00th=[30540], 10.00th=[31065], 20.00th=[31327], 00:32:49.638 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:32:49.638 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33817], 00:32:49.638 | 99.00th=[42206], 
99.50th=[49021], 99.90th=[52167], 99.95th=[65274], 00:32:49.638 | 99.99th=[65274] 00:32:49.638 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1974.74, stdev=72.99, samples=19 00:32:49.638 iops : min= 448, max= 512, avg=493.68, stdev=18.25, samples=19 00:32:49.638 lat (msec) : 20=0.16%, 50=99.52%, 100=0.32% 00:32:49.638 cpu : usr=99.10%, sys=0.53%, ctx=15, majf=0, minf=59 00:32:49.638 IO depths : 1=5.3%, 2=10.7%, 4=22.5%, 8=54.0%, 16=7.6%, 32=0.0%, >=64=0.0% 00:32:49.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.638 complete : 0=0.0%, 4=93.6%, 8=0.9%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.638 issued rwts: total=4952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:49.638 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:49.638 filename0: (groupid=0, jobs=1): err= 0: pid=3826816: Wed May 15 19:48:13 2024 00:32:49.638 read: IOPS=497, BW=1989KiB/s (2037kB/s)(19.4MiB/10003msec) 00:32:49.638 slat (usec): min=8, max=133, avg=28.58, stdev=22.92 00:32:49.638 clat (usec): min=12973, max=58797, avg=31946.96, stdev=4756.43 00:32:49.638 lat (usec): min=13005, max=58822, avg=31975.54, stdev=4758.32 00:32:49.638 clat percentiles (usec): 00:32:49.638 | 1.00th=[17957], 5.00th=[23725], 10.00th=[26608], 20.00th=[31065], 00:32:49.638 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:32:49.638 | 70.00th=[32637], 80.00th=[32900], 90.00th=[34866], 95.00th=[40633], 00:32:49.638 | 99.00th=[46924], 99.50th=[48497], 99.90th=[58459], 99.95th=[58983], 00:32:49.638 | 99.99th=[58983] 00:32:49.638 bw ( KiB/s): min= 1856, max= 2096, per=4.19%, avg=1990.74, stdev=67.48, samples=19 00:32:49.638 iops : min= 464, max= 524, avg=497.68, stdev=16.87, samples=19 00:32:49.638 lat (msec) : 20=2.11%, 50=97.53%, 100=0.36% 00:32:49.638 cpu : usr=98.84%, sys=0.68%, ctx=144, majf=0, minf=34 00:32:49.638 IO depths : 1=2.9%, 2=6.3%, 4=16.8%, 8=63.8%, 16=10.2%, 32=0.0%, >=64=0.0% 00:32:49.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.638 complete : 0=0.0%, 4=92.0%, 8=2.8%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.638 issued rwts: total=4974,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:49.638 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:49.638 filename1: (groupid=0, jobs=1): err= 0: pid=3826817: Wed May 15 19:48:13 2024 00:32:49.638 read: IOPS=519, BW=2080KiB/s (2130kB/s)(20.4MiB/10025msec) 00:32:49.638 slat (usec): min=4, max=155, avg=17.84, stdev=18.37 00:32:49.638 clat (usec): min=3306, max=59017, avg=30638.79, stdev=5591.63 00:32:49.638 lat (usec): min=3314, max=59027, avg=30656.63, stdev=5593.32 00:32:49.638 clat percentiles (usec): 00:32:49.638 | 1.00th=[ 5276], 5.00th=[19792], 10.00th=[24249], 20.00th=[30802], 00:32:49.638 | 30.00th=[31327], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:32:49.638 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33162], 95.00th=[34341], 00:32:49.638 | 99.00th=[40633], 99.50th=[42206], 99.90th=[58983], 99.95th=[58983], 00:32:49.638 | 99.99th=[58983] 00:32:49.638 bw ( KiB/s): min= 1920, max= 2608, per=4.38%, avg=2078.40, stdev=166.67, samples=20 00:32:49.638 iops : min= 480, max= 652, avg=519.60, stdev=41.67, samples=20 00:32:49.638 lat (msec) : 4=0.31%, 10=1.53%, 20=3.40%, 50=94.57%, 100=0.19% 00:32:49.638 cpu : usr=99.13%, sys=0.51%, ctx=52, majf=0, minf=56 00:32:49.638 IO depths : 1=4.2%, 2=8.6%, 4=19.1%, 8=59.4%, 16=8.7%, 32=0.0%, >=64=0.0% 00:32:49.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.638 complete : 0=0.0%, 
4=92.6%, 8=2.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.638 issued rwts: total=5212,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:49.638 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:49.638 filename1: (groupid=0, jobs=1): err= 0: pid=3826818: Wed May 15 19:48:13 2024 00:32:49.638 read: IOPS=486, BW=1948KiB/s (1994kB/s)(19.0MiB/10012msec) 00:32:49.638 slat (usec): min=8, max=134, avg=23.38, stdev=19.71 00:32:49.638 clat (usec): min=11393, max=55548, avg=32693.46, stdev=5466.76 00:32:49.638 lat (usec): min=11404, max=55587, avg=32716.84, stdev=5467.05 00:32:49.638 clat percentiles (usec): 00:32:49.639 | 1.00th=[16319], 5.00th=[23725], 10.00th=[28443], 20.00th=[31327], 00:32:49.639 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:32:49.639 | 70.00th=[32637], 80.00th=[33424], 90.00th=[38536], 95.00th=[43254], 00:32:49.639 | 99.00th=[51643], 99.50th=[52691], 99.90th=[55313], 99.95th=[55313], 00:32:49.639 | 99.99th=[55313] 00:32:49.639 bw ( KiB/s): min= 1788, max= 2080, per=4.10%, avg=1944.63, stdev=86.31, samples=19 00:32:49.639 iops : min= 447, max= 520, avg=486.16, stdev=21.58, samples=19 00:32:49.639 lat (msec) : 20=2.09%, 50=95.69%, 100=2.22% 00:32:49.639 cpu : usr=98.99%, sys=0.65%, ctx=16, majf=0, minf=45 00:32:49.639 IO depths : 1=2.5%, 2=5.3%, 4=14.2%, 8=66.4%, 16=11.5%, 32=0.0%, >=64=0.0% 00:32:49.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.639 complete : 0=0.0%, 4=91.6%, 8=4.2%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.639 issued rwts: total=4875,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:49.639 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:49.639 filename1: (groupid=0, jobs=1): err= 0: pid=3826819: Wed May 15 19:48:13 2024 00:32:49.639 read: IOPS=494, BW=1980KiB/s (2027kB/s)(19.4MiB/10022msec) 00:32:49.639 slat (usec): min=8, max=185, avg=35.27, stdev=26.91 00:32:49.639 clat (usec): min=13294, max=49400, avg=32040.25, stdev=1701.94 00:32:49.639 lat (usec): min=13303, max=49409, avg=32075.52, stdev=1699.20 00:32:49.639 clat percentiles (usec): 00:32:49.639 | 1.00th=[25822], 5.00th=[30540], 10.00th=[31065], 20.00th=[31327], 00:32:49.639 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:32:49.639 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33424], 00:32:49.639 | 99.00th=[38011], 99.50th=[40633], 99.90th=[47973], 99.95th=[47973], 00:32:49.639 | 99.99th=[49546] 00:32:49.639 bw ( KiB/s): min= 1920, max= 2048, per=4.17%, avg=1980.63, stdev=65.66, samples=19 00:32:49.639 iops : min= 480, max= 512, avg=495.16, stdev=16.42, samples=19 00:32:49.639 lat (msec) : 20=0.08%, 50=99.92% 00:32:49.639 cpu : usr=98.78%, sys=0.76%, ctx=27, majf=0, minf=57 00:32:49.639 IO depths : 1=5.7%, 2=11.5%, 4=23.8%, 8=52.0%, 16=7.0%, 32=0.0%, >=64=0.0% 00:32:49.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.639 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.639 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:49.639 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:49.639 filename1: (groupid=0, jobs=1): err= 0: pid=3826820: Wed May 15 19:48:13 2024 00:32:49.639 read: IOPS=493, BW=1974KiB/s (2022kB/s)(19.3MiB/10016msec) 00:32:49.639 slat (usec): min=5, max=131, avg=34.97, stdev=22.83 00:32:49.639 clat (usec): min=26947, max=54797, avg=32134.05, stdev=1422.81 00:32:49.639 lat (usec): min=26988, max=54818, avg=32169.02, stdev=1419.31 00:32:49.639 clat percentiles 
(usec): 00:32:49.639 | 1.00th=[30278], 5.00th=[31065], 10.00th=[31065], 20.00th=[31589], 00:32:49.639 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:32:49.639 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33424], 00:32:49.639 | 99.00th=[34341], 99.50th=[34866], 99.90th=[52167], 99.95th=[54789], 00:32:49.639 | 99.99th=[54789] 00:32:49.639 bw ( KiB/s): min= 1795, max= 2048, per=4.16%, avg=1974.05, stdev=77.30, samples=19 00:32:49.639 iops : min= 448, max= 512, avg=493.47, stdev=19.42, samples=19 00:32:49.639 lat (msec) : 50=99.68%, 100=0.32% 00:32:49.639 cpu : usr=99.09%, sys=0.56%, ctx=17, majf=0, minf=64 00:32:49.639 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:49.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.639 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.639 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:49.639 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:49.639 filename1: (groupid=0, jobs=1): err= 0: pid=3826821: Wed May 15 19:48:13 2024 00:32:49.639 read: IOPS=480, BW=1922KiB/s (1969kB/s)(18.8MiB/10002msec) 00:32:49.639 slat (usec): min=6, max=120, avg=23.81, stdev=19.44 00:32:49.639 clat (usec): min=10637, max=58369, avg=33162.75, stdev=5306.56 00:32:49.639 lat (usec): min=10673, max=58386, avg=33186.56, stdev=5306.21 00:32:49.639 clat percentiles (usec): 00:32:49.639 | 1.00th=[19268], 5.00th=[26346], 10.00th=[30016], 20.00th=[31327], 00:32:49.639 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:32:49.639 | 70.00th=[32900], 80.00th=[33817], 90.00th=[39060], 95.00th=[43779], 00:32:49.639 | 99.00th=[52167], 99.50th=[54789], 99.90th=[58459], 99.95th=[58459], 00:32:49.639 | 99.99th=[58459] 00:32:49.639 bw ( KiB/s): min= 1788, max= 2024, per=4.04%, avg=1917.68, stdev=71.20, samples=19 00:32:49.639 iops : min= 447, max= 506, avg=479.42, stdev=17.80, samples=19 00:32:49.639 lat (msec) : 20=1.29%, 50=96.96%, 100=1.75% 00:32:49.639 cpu : usr=98.98%, sys=0.67%, ctx=18, majf=0, minf=40 00:32:49.639 IO depths : 1=0.9%, 2=1.9%, 4=8.0%, 8=74.6%, 16=14.7%, 32=0.0%, >=64=0.0% 00:32:49.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.639 complete : 0=0.0%, 4=90.5%, 8=6.6%, 16=2.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.639 issued rwts: total=4807,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:49.639 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:49.639 filename1: (groupid=0, jobs=1): err= 0: pid=3826822: Wed May 15 19:48:13 2024 00:32:49.639 read: IOPS=495, BW=1982KiB/s (2030kB/s)(19.4MiB/10023msec) 00:32:49.639 slat (usec): min=8, max=164, avg=33.37, stdev=26.66 00:32:49.639 clat (usec): min=12768, max=53836, avg=31986.63, stdev=2830.76 00:32:49.639 lat (usec): min=12782, max=53871, avg=32020.00, stdev=2831.26 00:32:49.639 clat percentiles (usec): 00:32:49.639 | 1.00th=[21365], 5.00th=[28967], 10.00th=[31065], 20.00th=[31327], 00:32:49.639 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:32:49.639 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33817], 00:32:49.639 | 99.00th=[42206], 99.50th=[45351], 99.90th=[50594], 99.95th=[53740], 00:32:49.639 | 99.99th=[53740] 00:32:49.639 bw ( KiB/s): min= 1920, max= 2104, per=4.18%, avg=1983.58, stdev=69.97, samples=19 00:32:49.639 iops : min= 480, max= 526, avg=495.89, stdev=17.49, samples=19 00:32:49.639 lat (msec) : 20=0.48%, 50=99.32%, 100=0.20% 00:32:49.639 
cpu : usr=98.93%, sys=0.71%, ctx=70, majf=0, minf=54 00:32:49.639 IO depths : 1=4.9%, 2=10.2%, 4=22.8%, 8=54.4%, 16=7.7%, 32=0.0%, >=64=0.0% 00:32:49.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.639 complete : 0=0.0%, 4=93.7%, 8=0.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.639 issued rwts: total=4967,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:49.639 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:49.639 filename1: (groupid=0, jobs=1): err= 0: pid=3826824: Wed May 15 19:48:13 2024 00:32:49.639 read: IOPS=493, BW=1973KiB/s (2021kB/s)(19.3MiB/10021msec) 00:32:49.639 slat (usec): min=6, max=127, avg=18.75, stdev=15.54 00:32:49.639 clat (usec): min=22071, max=41861, avg=32265.76, stdev=1214.62 00:32:49.639 lat (usec): min=22092, max=41870, avg=32284.51, stdev=1214.03 00:32:49.639 clat percentiles (usec): 00:32:49.639 | 1.00th=[30540], 5.00th=[31065], 10.00th=[31327], 20.00th=[31589], 00:32:49.639 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:32:49.639 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817], 00:32:49.639 | 99.00th=[36439], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:32:49.639 | 99.99th=[41681] 00:32:49.639 bw ( KiB/s): min= 1920, max= 2048, per=4.16%, avg=1973.89, stdev=64.93, samples=19 00:32:49.639 iops : min= 480, max= 512, avg=493.47, stdev=16.23, samples=19 00:32:49.639 lat (msec) : 50=100.00% 00:32:49.639 cpu : usr=98.63%, sys=0.75%, ctx=25, majf=0, minf=51 00:32:49.639 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:49.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.639 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.639 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:49.639 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:49.639 filename1: (groupid=0, jobs=1): err= 0: pid=3826825: Wed May 15 19:48:13 2024 00:32:49.639 read: IOPS=484, BW=1938KiB/s (1984kB/s)(18.9MiB/10002msec) 00:32:49.639 slat (usec): min=6, max=156, avg=25.46, stdev=22.60 00:32:49.639 clat (usec): min=3843, max=62148, avg=32900.77, stdev=5104.40 00:32:49.639 lat (usec): min=3853, max=62166, avg=32926.23, stdev=5103.39 00:32:49.639 clat percentiles (usec): 00:32:49.639 | 1.00th=[20317], 5.00th=[26346], 10.00th=[30278], 20.00th=[31589], 00:32:49.639 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:32:49.639 | 70.00th=[32900], 80.00th=[33424], 90.00th=[38536], 95.00th=[42206], 00:32:49.639 | 99.00th=[52167], 99.50th=[55837], 99.90th=[62129], 99.95th=[62129], 00:32:49.639 | 99.99th=[62129] 00:32:49.639 bw ( KiB/s): min= 1664, max= 2048, per=4.06%, avg=1927.16, stdev=98.91, samples=19 00:32:49.639 iops : min= 416, max= 512, avg=481.79, stdev=24.73, samples=19 00:32:49.639 lat (msec) : 4=0.06%, 10=0.14%, 20=0.78%, 50=97.30%, 100=1.71% 00:32:49.639 cpu : usr=99.09%, sys=0.55%, ctx=56, majf=0, minf=69 00:32:49.639 IO depths : 1=0.8%, 2=2.0%, 4=7.1%, 8=74.7%, 16=15.4%, 32=0.0%, >=64=0.0% 00:32:49.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.639 complete : 0=0.0%, 4=90.5%, 8=7.2%, 16=2.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.639 issued rwts: total=4845,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:49.639 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:49.639 filename2: (groupid=0, jobs=1): err= 0: pid=3826826: Wed May 15 19:48:13 2024 00:32:49.639 read: IOPS=494, BW=1977KiB/s 
(2024kB/s)(19.3MiB/10003msec) 00:32:49.639 slat (usec): min=7, max=128, avg=31.98, stdev=22.93 00:32:49.639 clat (usec): min=2255, max=56597, avg=32139.84, stdev=3351.64 00:32:49.639 lat (usec): min=2263, max=56618, avg=32171.82, stdev=3351.66 00:32:49.639 clat percentiles (usec): 00:32:49.639 | 1.00th=[22938], 5.00th=[30278], 10.00th=[31065], 20.00th=[31589], 00:32:49.639 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:32:49.639 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33424], 95.00th=[34341], 00:32:49.639 | 99.00th=[41157], 99.50th=[50594], 99.90th=[56361], 99.95th=[56361], 00:32:49.639 | 99.99th=[56361] 00:32:49.639 bw ( KiB/s): min= 1795, max= 2048, per=4.14%, avg=1966.47, stdev=73.88, samples=19 00:32:49.639 iops : min= 448, max= 512, avg=491.58, stdev=18.57, samples=19 00:32:49.639 lat (msec) : 4=0.26%, 10=0.02%, 20=0.38%, 50=98.77%, 100=0.57% 00:32:49.639 cpu : usr=99.15%, sys=0.49%, ctx=16, majf=0, minf=41 00:32:49.639 IO depths : 1=3.0%, 2=6.0%, 4=13.3%, 8=65.6%, 16=12.1%, 32=0.0%, >=64=0.0% 00:32:49.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.639 complete : 0=0.0%, 4=91.7%, 8=5.1%, 16=3.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.639 issued rwts: total=4943,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:49.639 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:49.640 filename2: (groupid=0, jobs=1): err= 0: pid=3826827: Wed May 15 19:48:13 2024 00:32:49.640 read: IOPS=495, BW=1982KiB/s (2029kB/s)(19.4MiB/10023msec) 00:32:49.640 slat (usec): min=7, max=121, avg=32.89, stdev=21.65 00:32:49.640 clat (usec): min=12504, max=55335, avg=32015.88, stdev=2596.55 00:32:49.640 lat (usec): min=12515, max=55345, avg=32048.76, stdev=2596.67 00:32:49.640 clat percentiles (usec): 00:32:49.640 | 1.00th=[21627], 5.00th=[30540], 10.00th=[31065], 20.00th=[31327], 00:32:49.640 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:32:49.640 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817], 00:32:49.640 | 99.00th=[40633], 99.50th=[46924], 99.90th=[53740], 99.95th=[55313], 00:32:49.640 | 99.99th=[55313] 00:32:49.640 bw ( KiB/s): min= 1872, max= 2096, per=4.17%, avg=1980.63, stdev=71.07, samples=19 00:32:49.640 iops : min= 468, max= 524, avg=495.16, stdev=17.77, samples=19 00:32:49.640 lat (msec) : 20=0.40%, 50=99.48%, 100=0.12% 00:32:49.640 cpu : usr=99.03%, sys=0.60%, ctx=19, majf=0, minf=49 00:32:49.640 IO depths : 1=5.5%, 2=11.3%, 4=23.8%, 8=52.4%, 16=7.1%, 32=0.0%, >=64=0.0% 00:32:49.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.640 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.640 issued rwts: total=4966,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:49.640 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:49.640 filename2: (groupid=0, jobs=1): err= 0: pid=3826828: Wed May 15 19:48:13 2024 00:32:49.640 read: IOPS=491, BW=1968KiB/s (2015kB/s)(19.2MiB/10001msec) 00:32:49.640 slat (usec): min=6, max=134, avg=23.65, stdev=20.32 00:32:49.640 clat (usec): min=10864, max=61217, avg=32368.58, stdev=5354.12 00:32:49.640 lat (usec): min=10884, max=61234, avg=32392.23, stdev=5354.35 00:32:49.640 clat percentiles (usec): 00:32:49.640 | 1.00th=[14615], 5.00th=[23462], 10.00th=[29230], 20.00th=[31327], 00:32:49.640 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:32:49.640 | 70.00th=[32637], 80.00th=[33162], 90.00th=[35390], 95.00th=[41157], 00:32:49.640 | 99.00th=[51643], 99.50th=[55837], 
99.90th=[61080], 99.95th=[61080], 00:32:49.640 | 99.99th=[61080] 00:32:49.640 bw ( KiB/s): min= 1747, max= 2048, per=4.14%, avg=1963.95, stdev=72.60, samples=19 00:32:49.640 iops : min= 436, max= 512, avg=490.95, stdev=18.27, samples=19 00:32:49.640 lat (msec) : 20=2.78%, 50=95.39%, 100=1.83% 00:32:49.640 cpu : usr=99.02%, sys=0.62%, ctx=16, majf=0, minf=34 00:32:49.640 IO depths : 1=1.8%, 2=4.0%, 4=11.3%, 8=70.4%, 16=12.5%, 32=0.0%, >=64=0.0% 00:32:49.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.640 complete : 0=0.0%, 4=91.0%, 8=5.1%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.640 issued rwts: total=4920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:49.640 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:49.640 filename2: (groupid=0, jobs=1): err= 0: pid=3826829: Wed May 15 19:48:13 2024 00:32:49.640 read: IOPS=499, BW=2000KiB/s (2048kB/s)(19.6MiB/10014msec) 00:32:49.640 slat (usec): min=7, max=124, avg=29.50, stdev=21.79 00:32:49.640 clat (usec): min=10188, max=55987, avg=31753.54, stdev=3821.12 00:32:49.640 lat (usec): min=10204, max=55998, avg=31783.04, stdev=3822.35 00:32:49.640 clat percentiles (usec): 00:32:49.640 | 1.00th=[20055], 5.00th=[24511], 10.00th=[30278], 20.00th=[31327], 00:32:49.640 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:32:49.640 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33817], 00:32:49.640 | 99.00th=[47973], 99.50th=[53740], 99.90th=[55837], 99.95th=[55837], 00:32:49.640 | 99.99th=[55837] 00:32:49.640 bw ( KiB/s): min= 1792, max= 2240, per=4.19%, avg=1990.74, stdev=101.10, samples=19 00:32:49.640 iops : min= 448, max= 560, avg=497.68, stdev=25.27, samples=19 00:32:49.640 lat (msec) : 20=0.98%, 50=98.34%, 100=0.68% 00:32:49.640 cpu : usr=98.15%, sys=1.07%, ctx=30, majf=0, minf=55 00:32:49.640 IO depths : 1=4.7%, 2=9.6%, 4=20.3%, 8=57.0%, 16=8.3%, 32=0.0%, >=64=0.0% 00:32:49.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.640 complete : 0=0.0%, 4=93.0%, 8=1.8%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.640 issued rwts: total=5006,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:49.640 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:49.640 filename2: (groupid=0, jobs=1): err= 0: pid=3826830: Wed May 15 19:48:13 2024 00:32:49.640 read: IOPS=483, BW=1934KiB/s (1981kB/s)(18.9MiB/10001msec) 00:32:49.640 slat (usec): min=7, max=122, avg=26.33, stdev=22.06 00:32:49.640 clat (usec): min=10614, max=61452, avg=32919.52, stdev=5074.50 00:32:49.640 lat (usec): min=10622, max=61470, avg=32945.85, stdev=5073.57 00:32:49.640 clat percentiles (usec): 00:32:49.640 | 1.00th=[20841], 5.00th=[26084], 10.00th=[28967], 20.00th=[31327], 00:32:49.640 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:32:49.640 | 70.00th=[32900], 80.00th=[33817], 90.00th=[38536], 95.00th=[42206], 00:32:49.640 | 99.00th=[51643], 99.50th=[54264], 99.90th=[61080], 99.95th=[61080], 00:32:49.640 | 99.99th=[61604] 00:32:49.640 bw ( KiB/s): min= 1712, max= 2048, per=4.06%, avg=1928.58, stdev=77.51, samples=19 00:32:49.640 iops : min= 428, max= 512, avg=482.11, stdev=19.44, samples=19 00:32:49.640 lat (msec) : 20=0.70%, 50=97.54%, 100=1.76% 00:32:49.640 cpu : usr=99.10%, sys=0.55%, ctx=13, majf=0, minf=58 00:32:49.640 IO depths : 1=1.4%, 2=3.1%, 4=10.3%, 8=71.6%, 16=13.6%, 32=0.0%, >=64=0.0% 00:32:49.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.640 complete : 0=0.0%, 4=90.8%, 8=5.9%, 16=3.3%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:32:49.640 issued rwts: total=4836,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:49.640 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:49.640 filename2: (groupid=0, jobs=1): err= 0: pid=3826831: Wed May 15 19:48:13 2024 00:32:49.640 read: IOPS=491, BW=1965KiB/s (2012kB/s)(19.2MiB/10002msec) 00:32:49.640 slat (usec): min=6, max=137, avg=20.17, stdev=18.54 00:32:49.640 clat (usec): min=10710, max=73313, avg=32468.13, stdev=5888.03 00:32:49.640 lat (usec): min=10728, max=73332, avg=32488.31, stdev=5887.56 00:32:49.640 clat percentiles (usec): 00:32:49.640 | 1.00th=[15926], 5.00th=[23200], 10.00th=[26084], 20.00th=[30540], 00:32:49.640 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32637], 00:32:49.640 | 70.00th=[32900], 80.00th=[33817], 90.00th=[39060], 95.00th=[42730], 00:32:49.640 | 99.00th=[52691], 99.50th=[55837], 99.90th=[72877], 99.95th=[72877], 00:32:49.640 | 99.99th=[72877] 00:32:49.640 bw ( KiB/s): min= 1712, max= 2112, per=4.13%, avg=1958.74, stdev=89.57, samples=19 00:32:49.640 iops : min= 428, max= 528, avg=489.68, stdev=22.39, samples=19 00:32:49.640 lat (msec) : 20=1.93%, 50=96.05%, 100=2.01% 00:32:49.640 cpu : usr=99.13%, sys=0.51%, ctx=14, majf=0, minf=64 00:32:49.640 IO depths : 1=0.4%, 2=0.9%, 4=5.5%, 8=77.9%, 16=15.2%, 32=0.0%, >=64=0.0% 00:32:49.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.640 complete : 0=0.0%, 4=89.6%, 8=7.7%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.640 issued rwts: total=4914,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:49.640 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:49.640 filename2: (groupid=0, jobs=1): err= 0: pid=3826832: Wed May 15 19:48:13 2024 00:32:49.640 read: IOPS=488, BW=1954KiB/s (2001kB/s)(19.1MiB/10003msec) 00:32:49.640 slat (usec): min=8, max=108, avg=24.93, stdev=18.67 00:32:49.640 clat (usec): min=13886, max=59929, avg=32567.00, stdev=4845.81 00:32:49.640 lat (usec): min=13905, max=59944, avg=32591.93, stdev=4845.65 00:32:49.640 clat percentiles (usec): 00:32:49.640 | 1.00th=[17957], 5.00th=[25035], 10.00th=[30540], 20.00th=[31327], 00:32:49.640 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:32:49.640 | 70.00th=[32637], 80.00th=[33162], 90.00th=[37487], 95.00th=[41681], 00:32:49.640 | 99.00th=[49546], 99.50th=[52167], 99.90th=[56361], 99.95th=[60031], 00:32:49.640 | 99.99th=[60031] 00:32:49.640 bw ( KiB/s): min= 1840, max= 2048, per=4.11%, avg=1950.32, stdev=65.35, samples=19 00:32:49.640 iops : min= 460, max= 512, avg=487.58, stdev=16.34, samples=19 00:32:49.640 lat (msec) : 20=1.45%, 50=97.58%, 100=0.96% 00:32:49.640 cpu : usr=98.90%, sys=0.73%, ctx=17, majf=0, minf=58 00:32:49.640 IO depths : 1=2.9%, 2=5.9%, 4=16.2%, 8=64.1%, 16=10.8%, 32=0.0%, >=64=0.0% 00:32:49.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.640 complete : 0=0.0%, 4=92.1%, 8=3.4%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.640 issued rwts: total=4886,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:49.640 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:49.640 filename2: (groupid=0, jobs=1): err= 0: pid=3826834: Wed May 15 19:48:13 2024 00:32:49.640 read: IOPS=498, BW=1994KiB/s (2042kB/s)(19.5MiB/10014msec) 00:32:49.640 slat (nsec): min=5844, max=53426, avg=12270.88, stdev=6127.18 00:32:49.640 clat (usec): min=5189, max=34597, avg=31985.50, stdev=2380.29 00:32:49.640 lat (usec): min=5203, max=34621, avg=31997.77, stdev=2379.68 00:32:49.640 clat percentiles (usec): 00:32:49.640 
| 1.00th=[21103], 5.00th=[30802], 10.00th=[31327], 20.00th=[31589], 00:32:49.640 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:32:49.640 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817], 00:32:49.640 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341], 00:32:49.640 | 99.99th=[34341] 00:32:49.640 bw ( KiB/s): min= 1920, max= 2176, per=4.20%, avg=1994.11, stdev=77.69, samples=19 00:32:49.640 iops : min= 480, max= 544, avg=498.53, stdev=19.42, samples=19 00:32:49.640 lat (msec) : 10=0.32%, 20=0.64%, 50=99.04% 00:32:49.640 cpu : usr=99.20%, sys=0.46%, ctx=101, majf=0, minf=47 00:32:49.640 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:49.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.640 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.640 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:49.640 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:49.640 00:32:49.640 Run status group 0 (all jobs): 00:32:49.640 READ: bw=46.3MiB/s (48.6MB/s), 1922KiB/s-2080KiB/s (1969kB/s-2130kB/s), io=465MiB (487MB), run=10001-10025msec 00:32:49.640 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:32:49.640 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:49.640 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:49.640 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:49.640 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:49.640 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:49.640 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.640 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:49.640 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.640 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:49.640 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:49.641 bdev_null0 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:49.641 19:48:14 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:49.641 [2024-05-15 19:48:14.207383] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:49.641 bdev_null1 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:49.641 { 00:32:49.641 "params": { 00:32:49.641 "name": "Nvme$subsystem", 00:32:49.641 "trtype": "$TEST_TRANSPORT", 00:32:49.641 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:49.641 "adrfam": "ipv4", 00:32:49.641 "trsvcid": "$NVMF_PORT", 00:32:49.641 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:49.641 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:49.641 "hdgst": ${hdgst:-false}, 00:32:49.641 "ddgst": ${ddgst:-false} 00:32:49.641 }, 00:32:49.641 "method": "bdev_nvme_attach_controller" 00:32:49.641 } 00:32:49.641 EOF 00:32:49.641 )") 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:32:49.641 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:32:49.642 19:48:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:49.642 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:49.642 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:49.642 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:32:49.642 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:49.642 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:32:49.642 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:49.642 19:48:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:49.642 19:48:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:49.642 { 00:32:49.642 "params": { 00:32:49.642 "name": "Nvme$subsystem", 00:32:49.642 "trtype": "$TEST_TRANSPORT", 00:32:49.642 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:49.642 "adrfam": "ipv4", 00:32:49.642 "trsvcid": "$NVMF_PORT", 00:32:49.642 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:49.642 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:49.642 "hdgst": ${hdgst:-false}, 00:32:49.642 "ddgst": ${ddgst:-false} 00:32:49.642 }, 00:32:49.642 "method": "bdev_nvme_attach_controller" 00:32:49.642 } 00:32:49.642 EOF 00:32:49.642 )") 00:32:49.642 19:48:14 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file++ )) 00:32:49.642 19:48:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:49.642 19:48:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:49.642 19:48:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:32:49.642 19:48:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:32:49.642 19:48:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:49.642 "params": { 00:32:49.642 "name": "Nvme0", 00:32:49.642 "trtype": "tcp", 00:32:49.642 "traddr": "10.0.0.2", 00:32:49.642 "adrfam": "ipv4", 00:32:49.642 "trsvcid": "4420", 00:32:49.642 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:49.642 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:49.642 "hdgst": false, 00:32:49.642 "ddgst": false 00:32:49.642 }, 00:32:49.642 "method": "bdev_nvme_attach_controller" 00:32:49.642 },{ 00:32:49.642 "params": { 00:32:49.642 "name": "Nvme1", 00:32:49.642 "trtype": "tcp", 00:32:49.642 "traddr": "10.0.0.2", 00:32:49.642 "adrfam": "ipv4", 00:32:49.642 "trsvcid": "4420", 00:32:49.642 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:49.642 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:49.642 "hdgst": false, 00:32:49.642 "ddgst": false 00:32:49.642 }, 00:32:49.642 "method": "bdev_nvme_attach_controller" 00:32:49.642 }' 00:32:49.642 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:32:49.642 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:32:49.642 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:32:49.642 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:49.642 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:32:49.642 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:32:49.642 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:32:49.642 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:32:49.642 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:49.642 19:48:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:49.642 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:49.642 ... 00:32:49.642 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:49.642 ... 
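
The create_subsystem and destroy_subsystem helpers traced throughout this test reduce to a short sequence of SPDK RPC calls, every one of which appears verbatim in the xtrace above. A minimal standalone sketch, assuming the harness's rpc_cmd wrapper is equivalent to calling scripts/rpc.py in the SPDK checkout against an already running nvmf target:

RPC=./scripts/rpc.py   # assumed location; stands in for the rpc_cmd wrapper used by the harness
# create_subsystem 0: one DIF-capable null bdev exported over NVMe/TCP, as traced above
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# destroy_subsystem 0: teardown in reverse order, as at the end of the run
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$RPC bdev_null_delete bdev_null0
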
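The job file that gen_fio_conf feeds to fio on /dev/fd/61 is never echoed into the log; only fio's per-job summary lines are. The heredoc below is a rough reconstruction from those summaries and the dif.sh parameters set above (randread, bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5). The job file name, the filename= values naming the attached bdevs, and ./nvme_tcp.json (standing in for the bdev_nvme_attach_controller JSON printed earlier) are assumptions, and ordinary files replace the /dev/fd descriptors used by the harness:

cat <<FIO > dif_rand_params.fio
[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
time_based=1
runtime=5

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
FIO
# mirror of the traced invocation, with files instead of /dev/fd/62 and /dev/fd/61
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf ./nvme_tcp.json dif_rand_params.fio

fio reads a comma-separated bs value as read,write,trim sizes, which is why the summary lines report (R) 8192B, (W) 16.0KiB and (T) 128KiB for the same job, and numjobs=2 over the two job sections is what produces the "Starting 4 threads" line that follows.
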
00:32:49.642 fio-3.35 00:32:49.642 Starting 4 threads 00:32:49.642 EAL: No free 2048 kB hugepages reported on node 1 00:32:54.945 00:32:54.945 filename0: (groupid=0, jobs=1): err= 0: pid=3829707: Wed May 15 19:48:20 2024 00:32:54.945 read: IOPS=2120, BW=16.6MiB/s (17.4MB/s)(82.9MiB/5003msec) 00:32:54.945 slat (nsec): min=8269, max=56726, avg=9344.42, stdev=3194.10 00:32:54.945 clat (usec): min=1674, max=6624, avg=3747.49, stdev=559.12 00:32:54.945 lat (usec): min=1683, max=6633, avg=3756.84, stdev=559.06 00:32:54.945 clat percentiles (usec): 00:32:54.945 | 1.00th=[ 2507], 5.00th=[ 2933], 10.00th=[ 3163], 20.00th=[ 3392], 00:32:54.945 | 30.00th=[ 3523], 40.00th=[ 3621], 50.00th=[ 3752], 60.00th=[ 3785], 00:32:54.945 | 70.00th=[ 3818], 80.00th=[ 4015], 90.00th=[ 4490], 95.00th=[ 4817], 00:32:54.945 | 99.00th=[ 5538], 99.50th=[ 5800], 99.90th=[ 6325], 99.95th=[ 6521], 00:32:54.945 | 99.99th=[ 6587] 00:32:54.945 bw ( KiB/s): min=16640, max=17232, per=25.43%, avg=16958.40, stdev=195.30, samples=10 00:32:54.945 iops : min= 2080, max= 2154, avg=2119.80, stdev=24.41, samples=10 00:32:54.945 lat (msec) : 2=0.23%, 4=79.10%, 10=20.68% 00:32:54.945 cpu : usr=96.58%, sys=3.14%, ctx=10, majf=0, minf=0 00:32:54.945 IO depths : 1=0.4%, 2=1.3%, 4=71.7%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:54.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.945 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.945 issued rwts: total=10607,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:54.945 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:54.945 filename0: (groupid=0, jobs=1): err= 0: pid=3829708: Wed May 15 19:48:20 2024 00:32:54.945 read: IOPS=2072, BW=16.2MiB/s (17.0MB/s)(81.0MiB/5002msec) 00:32:54.945 slat (nsec): min=8249, max=56877, avg=9465.54, stdev=3471.06 00:32:54.945 clat (usec): min=1762, max=7038, avg=3833.04, stdev=613.74 00:32:54.945 lat (usec): min=1770, max=7066, avg=3842.50, stdev=613.80 00:32:54.945 clat percentiles (usec): 00:32:54.945 | 1.00th=[ 2573], 5.00th=[ 2999], 10.00th=[ 3195], 20.00th=[ 3425], 00:32:54.945 | 30.00th=[ 3556], 40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3785], 00:32:54.945 | 70.00th=[ 3851], 80.00th=[ 4178], 90.00th=[ 4686], 95.00th=[ 5080], 00:32:54.945 | 99.00th=[ 5735], 99.50th=[ 5997], 99.90th=[ 6521], 99.95th=[ 6587], 00:32:54.945 | 99.99th=[ 7046] 00:32:54.945 bw ( KiB/s): min=16272, max=16992, per=24.90%, avg=16600.89, stdev=223.01, samples=9 00:32:54.945 iops : min= 2034, max= 2124, avg=2075.11, stdev=27.88, samples=9 00:32:54.945 lat (msec) : 2=0.11%, 4=74.95%, 10=24.94% 00:32:54.945 cpu : usr=96.08%, sys=3.62%, ctx=9, majf=0, minf=9 00:32:54.945 IO depths : 1=0.3%, 2=1.1%, 4=71.3%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:54.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.945 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.945 issued rwts: total=10369,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:54.945 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:54.945 filename1: (groupid=0, jobs=1): err= 0: pid=3829709: Wed May 15 19:48:20 2024 00:32:54.945 read: IOPS=2081, BW=16.3MiB/s (17.1MB/s)(81.4MiB/5002msec) 00:32:54.945 slat (nsec): min=8260, max=38944, avg=9356.13, stdev=3232.08 00:32:54.945 clat (usec): min=1952, max=45053, avg=3817.47, stdev=1279.72 00:32:54.945 lat (usec): min=1960, max=45087, avg=3826.82, stdev=1279.83 00:32:54.945 clat percentiles (usec): 00:32:54.945 | 1.00th=[ 2606], 5.00th=[ 
2933], 10.00th=[ 3163], 20.00th=[ 3425], 00:32:54.945 | 30.00th=[ 3523], 40.00th=[ 3654], 50.00th=[ 3752], 60.00th=[ 3785], 00:32:54.945 | 70.00th=[ 3818], 80.00th=[ 4113], 90.00th=[ 4555], 95.00th=[ 5014], 00:32:54.945 | 99.00th=[ 5669], 99.50th=[ 5866], 99.90th=[ 6325], 99.95th=[44827], 00:32:54.945 | 99.99th=[44827] 00:32:54.945 bw ( KiB/s): min=15374, max=16960, per=24.97%, avg=16652.60, stdev=470.53, samples=10 00:32:54.945 iops : min= 1921, max= 2120, avg=2081.50, stdev=59.04, samples=10 00:32:54.945 lat (msec) : 2=0.03%, 4=77.49%, 10=22.40%, 50=0.08% 00:32:54.945 cpu : usr=96.12%, sys=3.58%, ctx=8, majf=0, minf=0 00:32:54.945 IO depths : 1=0.3%, 2=1.1%, 4=70.3%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:54.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.945 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.945 issued rwts: total=10414,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:54.945 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:54.945 filename1: (groupid=0, jobs=1): err= 0: pid=3829710: Wed May 15 19:48:20 2024 00:32:54.945 read: IOPS=2061, BW=16.1MiB/s (16.9MB/s)(80.5MiB/5001msec) 00:32:54.945 slat (nsec): min=8255, max=36998, avg=9422.22, stdev=3304.15 00:32:54.945 clat (usec): min=1992, max=6497, avg=3855.83, stdev=604.72 00:32:54.945 lat (usec): min=2000, max=6506, avg=3865.25, stdev=604.66 00:32:54.945 clat percentiles (usec): 00:32:54.945 | 1.00th=[ 2704], 5.00th=[ 3064], 10.00th=[ 3228], 20.00th=[ 3458], 00:32:54.945 | 30.00th=[ 3556], 40.00th=[ 3720], 50.00th=[ 3752], 60.00th=[ 3785], 00:32:54.945 | 70.00th=[ 3884], 80.00th=[ 4228], 90.00th=[ 4686], 95.00th=[ 5211], 00:32:54.945 | 99.00th=[ 5735], 99.50th=[ 5932], 99.90th=[ 6259], 99.95th=[ 6390], 00:32:54.945 | 99.99th=[ 6521] 00:32:54.945 bw ( KiB/s): min=16272, max=16704, per=24.71%, avg=16478.22, stdev=157.47, samples=9 00:32:54.945 iops : min= 2034, max= 2088, avg=2059.78, stdev=19.68, samples=9 00:32:54.945 lat (msec) : 2=0.02%, 4=74.29%, 10=25.69% 00:32:54.945 cpu : usr=96.48%, sys=3.22%, ctx=9, majf=0, minf=9 00:32:54.945 IO depths : 1=0.3%, 2=1.4%, 4=69.9%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:54.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.945 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.945 issued rwts: total=10308,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:54.945 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:54.945 00:32:54.945 Run status group 0 (all jobs): 00:32:54.945 READ: bw=65.1MiB/s (68.3MB/s), 16.1MiB/s-16.6MiB/s (16.9MB/s-17.4MB/s), io=326MiB (342MB), run=5001-5003msec 00:32:54.945 19:48:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:32:54.945 19:48:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:54.945 19:48:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:54.945 19:48:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:54.946 19:48:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:54.946 19:48:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:54.946 19:48:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.946 19:48:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:54.946 19:48:20 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.946 19:48:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:54.946 19:48:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.946 19:48:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:54.946 19:48:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.946 19:48:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:54.946 19:48:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:54.946 19:48:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:32:54.946 19:48:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:54.946 19:48:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.946 19:48:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:54.946 19:48:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.946 19:48:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:54.946 19:48:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.946 19:48:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:54.946 19:48:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.946 00:32:54.946 real 0m24.393s 00:32:54.946 user 5m19.533s 00:32:54.946 sys 0m3.912s 00:32:54.946 19:48:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:54.946 19:48:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:54.946 ************************************ 00:32:54.946 END TEST fio_dif_rand_params 00:32:54.946 ************************************ 00:32:54.946 19:48:20 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:32:54.946 19:48:20 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:32:54.946 19:48:20 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:54.946 19:48:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:54.946 ************************************ 00:32:54.946 START TEST fio_dif_digest 00:32:54.946 ************************************ 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:54.946 bdev_null0 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:54.946 [2024-05-15 19:48:20.804046] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:54.946 { 00:32:54.946 "params": { 00:32:54.946 "name": "Nvme$subsystem", 00:32:54.946 "trtype": "$TEST_TRANSPORT", 00:32:54.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:54.946 "adrfam": "ipv4", 00:32:54.946 "trsvcid": "$NVMF_PORT", 00:32:54.946 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:54.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:54.946 "hdgst": ${hdgst:-false}, 00:32:54.946 "ddgst": ${ddgst:-false} 00:32:54.946 }, 00:32:54.946 "method": "bdev_nvme_attach_controller" 00:32:54.946 } 00:32:54.946 EOF 00:32:54.946 )") 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:54.946 "params": { 00:32:54.946 "name": "Nvme0", 00:32:54.946 "trtype": "tcp", 00:32:54.946 "traddr": "10.0.0.2", 00:32:54.946 "adrfam": "ipv4", 00:32:54.946 "trsvcid": "4420", 00:32:54.946 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:54.946 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:54.946 "hdgst": true, 00:32:54.946 "ddgst": true 00:32:54.946 }, 00:32:54.946 "method": "bdev_nvme_attach_controller" 00:32:54.946 }' 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:54.946 19:48:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:55.212 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:55.212 ... 
00:32:55.212 fio-3.35 00:32:55.212 Starting 3 threads 00:32:55.212 EAL: No free 2048 kB hugepages reported on node 1 00:33:07.468 00:33:07.468 filename0: (groupid=0, jobs=1): err= 0: pid=3831023: Wed May 15 19:48:31 2024 00:33:07.468 read: IOPS=187, BW=23.5MiB/s (24.6MB/s)(236MiB/10049msec) 00:33:07.468 slat (nsec): min=8567, max=62046, avg=9423.26, stdev=1732.75 00:33:07.468 clat (msec): min=7, max=137, avg=15.93, stdev=10.56 00:33:07.468 lat (msec): min=7, max=137, avg=15.93, stdev=10.56 00:33:07.468 clat percentiles (msec): 00:33:07.468 | 1.00th=[ 10], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 12], 00:33:07.468 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 14], 60.00th=[ 15], 00:33:07.468 | 70.00th=[ 15], 80.00th=[ 16], 90.00th=[ 17], 95.00th=[ 52], 00:33:07.468 | 99.00th=[ 56], 99.50th=[ 57], 99.90th=[ 97], 99.95th=[ 138], 00:33:07.468 | 99.99th=[ 138] 00:33:07.468 bw ( KiB/s): min=18688, max=28928, per=32.45%, avg=24153.60, stdev=2914.44, samples=20 00:33:07.468 iops : min= 146, max= 226, avg=188.70, stdev=22.77, samples=20 00:33:07.468 lat (msec) : 10=2.96%, 20=91.32%, 50=0.16%, 100=5.51%, 250=0.05% 00:33:07.468 cpu : usr=95.77%, sys=3.98%, ctx=18, majf=0, minf=133 00:33:07.468 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:07.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:07.468 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:07.468 issued rwts: total=1889,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:07.468 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:07.468 filename0: (groupid=0, jobs=1): err= 0: pid=3831024: Wed May 15 19:48:31 2024 00:33:07.468 read: IOPS=192, BW=24.0MiB/s (25.2MB/s)(242MiB/10048msec) 00:33:07.468 slat (nsec): min=8565, max=31023, avg=9539.44, stdev=1090.64 00:33:07.468 clat (usec): min=6900, max=96161, avg=15567.13, stdev=9722.91 00:33:07.468 lat (usec): min=6909, max=96170, avg=15576.67, stdev=9722.92 00:33:07.468 clat percentiles (usec): 00:33:07.468 | 1.00th=[ 8225], 5.00th=[ 9765], 10.00th=[10552], 20.00th=[11338], 00:33:07.468 | 30.00th=[12649], 40.00th=[13435], 50.00th=[13829], 60.00th=[14353], 00:33:07.468 | 70.00th=[14746], 80.00th=[15401], 90.00th=[16319], 95.00th=[48497], 00:33:07.468 | 99.00th=[56361], 99.50th=[57410], 99.90th=[95945], 99.95th=[95945], 00:33:07.468 | 99.99th=[95945] 00:33:07.468 bw ( KiB/s): min=17920, max=31232, per=33.19%, avg=24704.00, stdev=2999.85, samples=20 00:33:07.468 iops : min= 140, max= 244, avg=193.00, stdev=23.44, samples=20 00:33:07.468 lat (msec) : 10=6.37%, 20=88.10%, 50=0.57%, 100=4.97% 00:33:07.468 cpu : usr=95.06%, sys=4.35%, ctx=521, majf=0, minf=117 00:33:07.468 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:07.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:07.468 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:07.468 issued rwts: total=1932,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:07.468 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:07.468 filename0: (groupid=0, jobs=1): err= 0: pid=3831025: Wed May 15 19:48:31 2024 00:33:07.468 read: IOPS=201, BW=25.2MiB/s (26.4MB/s)(253MiB/10046msec) 00:33:07.468 slat (nsec): min=8545, max=34686, avg=9421.59, stdev=1377.95 00:33:07.468 clat (usec): min=7172, max=96938, avg=14871.72, stdev=8406.91 00:33:07.468 lat (usec): min=7181, max=96947, avg=14881.14, stdev=8406.95 00:33:07.468 clat percentiles (usec): 00:33:07.468 | 1.00th=[ 7963], 5.00th=[ 9241], 
10.00th=[10290], 20.00th=[11338], 00:33:07.468 | 30.00th=[12387], 40.00th=[13304], 50.00th=[13960], 60.00th=[14353], 00:33:07.468 | 70.00th=[14746], 80.00th=[15270], 90.00th=[16188], 95.00th=[18220], 00:33:07.468 | 99.00th=[55837], 99.50th=[56361], 99.90th=[92799], 99.95th=[92799], 00:33:07.468 | 99.99th=[96994] 00:33:07.468 bw ( KiB/s): min=21248, max=29440, per=34.74%, avg=25856.00, stdev=2524.73, samples=20 00:33:07.468 iops : min= 166, max= 230, avg=202.00, stdev=19.72, samples=20 00:33:07.468 lat (msec) : 10=8.06%, 20=88.28%, 50=0.15%, 100=3.51% 00:33:07.468 cpu : usr=95.88%, sys=3.87%, ctx=21, majf=0, minf=106 00:33:07.468 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:07.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:07.468 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:07.468 issued rwts: total=2022,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:07.468 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:07.468 00:33:07.468 Run status group 0 (all jobs): 00:33:07.469 READ: bw=72.7MiB/s (76.2MB/s), 23.5MiB/s-25.2MiB/s (24.6MB/s-26.4MB/s), io=730MiB (766MB), run=10046-10049msec 00:33:07.469 19:48:32 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:33:07.469 19:48:32 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:33:07.469 19:48:32 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:33:07.469 19:48:32 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:07.469 19:48:32 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:33:07.469 19:48:32 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:07.469 19:48:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.469 19:48:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:07.469 19:48:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.469 19:48:32 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:07.469 19:48:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.469 19:48:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:07.469 19:48:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.469 00:33:07.469 real 0m11.338s 00:33:07.469 user 0m46.081s 00:33:07.469 sys 0m1.594s 00:33:07.469 19:48:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:07.469 19:48:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:07.469 ************************************ 00:33:07.469 END TEST fio_dif_digest 00:33:07.469 ************************************ 00:33:07.469 19:48:32 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:33:07.469 19:48:32 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:33:07.469 19:48:32 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:07.469 19:48:32 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:33:07.469 19:48:32 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:07.469 19:48:32 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:33:07.469 19:48:32 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:07.469 19:48:32 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:07.469 rmmod nvme_tcp 00:33:07.469 rmmod nvme_fabrics 00:33:07.469 rmmod nvme_keyring 
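Both fio runs above use the same invocation pattern: the bdev JSON config is generated on the fly and passed to fio's spdk_bdev ioengine through a file descriptor, with the plugin pulled in via LD_PRELOAD. A standalone sketch of the digest job — the plugin path, the bdev.json file name and the job layout are illustrative, and bdev.json is assumed to hold the bdev_nvme_attach_controller entries printed above:

    LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json --thread=1 \
        --name=digest --filename=Nvme0n1 --rw=randread --bs=128k --iodepth=3 --numjobs=3 \
        --time_based=1 --runtime=10

Here filename=Nvme0n1 refers to the first namespace of the controller named Nvme0 in the JSON config, and thread=1 is required by the SPDK fio plugin.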
00:33:07.469 19:48:32 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:07.469 19:48:32 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:33:07.469 19:48:32 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:33:07.469 19:48:32 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 3820185 ']' 00:33:07.469 19:48:32 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 3820185 00:33:07.469 19:48:32 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 3820185 ']' 00:33:07.469 19:48:32 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 3820185 00:33:07.469 19:48:32 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:33:07.469 19:48:32 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:07.469 19:48:32 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3820185 00:33:07.469 19:48:32 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:07.469 19:48:32 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:07.469 19:48:32 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3820185' 00:33:07.469 killing process with pid 3820185 00:33:07.469 19:48:32 nvmf_dif -- common/autotest_common.sh@965 -- # kill 3820185 00:33:07.469 [2024-05-15 19:48:32.272490] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:33:07.469 19:48:32 nvmf_dif -- common/autotest_common.sh@970 -- # wait 3820185 00:33:07.469 19:48:32 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:33:07.469 19:48:32 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:10.016 Waiting for block devices as requested 00:33:10.016 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:10.016 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:10.277 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:10.277 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:10.277 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:10.537 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:10.537 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:10.537 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:10.797 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:33:10.797 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:11.057 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:11.057 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:11.057 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:11.316 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:11.316 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:11.316 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:11.576 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:11.836 19:48:37 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:11.836 19:48:37 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:11.836 19:48:37 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:11.836 19:48:37 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:11.836 19:48:37 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:11.836 19:48:37 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:11.836 19:48:37 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:13.746 19:48:39 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:13.746 00:33:13.746 real 1m19.800s 
00:33:13.746 user 8m3.314s 00:33:13.746 sys 0m21.292s 00:33:13.746 19:48:39 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:13.746 19:48:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:13.746 ************************************ 00:33:13.746 END TEST nvmf_dif 00:33:13.746 ************************************ 00:33:13.746 19:48:39 -- spdk/autotest.sh@289 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:13.747 19:48:39 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:13.747 19:48:39 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:13.747 19:48:39 -- common/autotest_common.sh@10 -- # set +x 00:33:14.008 ************************************ 00:33:14.008 START TEST nvmf_abort_qd_sizes 00:33:14.008 ************************************ 00:33:14.008 19:48:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:14.008 * Looking for test storage... 00:33:14.008 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:14.008 19:48:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:14.008 19:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:33:14.008 19:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:14.008 19:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:14.008 19:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:14.008 19:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:14.008 19:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:14.008 19:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:14.008 19:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:14.008 19:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:14.008 19:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:14.008 19:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:14.008 19:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:14.008 19:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:14.008 19:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:14.008 19:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:14.008 19:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:14.008 19:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:14.008 19:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:14.008 19:48:40 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:14.008 19:48:40 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:14.008 19:48:40 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:14.008 19:48:40 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.008 19:48:40 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.008 19:48:40 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.008 19:48:40 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:33:14.009 19:48:40 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.009 19:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:33:14.009 19:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:14.009 19:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:14.009 19:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:14.009 19:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:14.009 19:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:14.009 19:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:14.009 19:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:14.009 19:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:14.009 19:48:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:33:14.009 19:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:14.009 19:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:14.009 19:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:14.009 19:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:14.009 19:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:14.009 19:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:14.009 19:48:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:14.009 19:48:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:14.009 19:48:40 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:14.009 19:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:14.009 19:48:40 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:33:14.009 19:48:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:22.153 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:22.153 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:33:22.153 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:22.153 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:22.153 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:22.153 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:22.153 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:22.153 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:33:22.153 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:22.153 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:33:22.153 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:33:22.153 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:33:22.153 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:33:22.153 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:33:22.153 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:33:22.153 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:22.153 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:22.153 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:22.153 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:22.153 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:22.153 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:22.153 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:22.153 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:22.153 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:22.153 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:22.153 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:22.153 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:22.153 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:22.153 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:22.153 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:22.153 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:22.153 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:22.153 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:22.154 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:22.154 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:22.154 Found net devices under 0000:31:00.0: cvl_0_0 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:22.154 Found net devices under 0000:31:00.1: cvl_0_1 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
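The device discovery above matched two Intel E810 ports (device id 0x159b) and recorded the netdevs behind them (cvl_0_0, cvl_0_1); the trace that follows moves one of them into a network namespace so the NVMe/TCP target and initiator can talk over real hardware on a single host. A rough by-hand equivalent of the discovery step — the BDFs come from the 'Found ...' lines above, the rest is standard lspci/sysfs usage:

    lspci -nn -d 8086:159b                                  # list E810 (0x159b) PCI functions
    ls /sys/bus/pci/devices/0000:31:00.0/net \
       /sys/bus/pci/devices/0000:31:00.1/net                # netdev name(s) bound to each port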
00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:22.154 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:22.415 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:22.415 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:22.415 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:22.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:22.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:33:22.415 00:33:22.415 --- 10.0.0.2 ping statistics --- 00:33:22.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:22.415 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:33:22.415 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:22.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:22.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:33:22.415 00:33:22.415 --- 10.0.0.1 ping statistics --- 00:33:22.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:22.415 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:33:22.415 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:22.415 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:33:22.415 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:33:22.415 19:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:26.626 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:26.626 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:26.626 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:33:26.626 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:26.626 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:26.626 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:33:26.626 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:26.626 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:26.626 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:26.626 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:26.626 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:33:26.626 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:26.626 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:26.626 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:33:26.626 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:26.626 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:26.626 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:33:26.892 19:48:52 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:26.892 19:48:52 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:26.892 19:48:52 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:26.893 19:48:52 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:26.893 19:48:52 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:26.893 19:48:52 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:26.893 19:48:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:33:26.893 19:48:52 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:26.893 19:48:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:26.893 19:48:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:26.893 19:48:52 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=3841315 00:33:26.893 19:48:52 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 3841315 00:33:26.893 19:48:52 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:33:26.893 19:48:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 3841315 ']' 00:33:26.893 19:48:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:26.893 19:48:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:26.893 19:48:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:26.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:26.893 19:48:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:26.893 19:48:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:26.893 [2024-05-15 19:48:52.949106] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:33:26.893 [2024-05-15 19:48:52.949174] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:26.893 EAL: No free 2048 kB hugepages reported on node 1 00:33:26.893 [2024-05-15 19:48:53.041823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:27.159 [2024-05-15 19:48:53.140365] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:27.159 [2024-05-15 19:48:53.140426] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:27.159 [2024-05-15 19:48:53.140435] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:27.159 [2024-05-15 19:48:53.140442] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:27.159 [2024-05-15 19:48:53.140448] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:27.159 [2024-05-15 19:48:53.140588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:27.159 [2024-05-15 19:48:53.140718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:27.159 [2024-05-15 19:48:53.140884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:27.159 [2024-05-15 19:48:53.140885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:27.730 19:48:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:27.730 19:48:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:33:27.730 19:48:53 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:27.730 19:48:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:27.730 19:48:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:27.730 19:48:53 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:27.730 19:48:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:33:27.730 19:48:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:33:27.730 19:48:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:33:27.730 19:48:53 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:33:27.730 19:48:53 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:33:27.730 19:48:53 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:33:27.730 19:48:53 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:33:27.730 19:48:53 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:33:27.730 19:48:53 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:33:27.730 19:48:53 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:33:27.730 19:48:53 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:33:27.730 19:48:53 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:33:27.730 19:48:53 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:33:27.730 19:48:53 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:33:27.730 19:48:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:33:27.730 19:48:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:33:27.730 19:48:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:33:27.730 19:48:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:27.730 19:48:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:27.730 19:48:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:27.990 ************************************ 00:33:27.990 START TEST spdk_target_abort 00:33:27.990 ************************************ 00:33:27.990 19:48:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:33:27.990 19:48:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:33:27.990 19:48:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:33:27.990 19:48:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.990 19:48:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:28.251 spdk_targetn1 00:33:28.251 19:48:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.251 19:48:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:28.251 19:48:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.251 19:48:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:28.251 [2024-05-15 19:48:54.238572] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:28.251 19:48:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.251 19:48:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:33:28.251 19:48:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.252 19:48:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:28.252 19:48:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.252 19:48:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:33:28.252 19:48:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.252 19:48:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:28.252 19:48:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.252 19:48:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:33:28.252 19:48:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.252 19:48:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:28.252 [2024-05-15 19:48:54.278622] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:33:28.252 [2024-05-15 19:48:54.278847] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:28.252 19:48:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.252 19:48:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:33:28.252 19:48:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:28.252 19:48:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:28.252 19:48:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:33:28.252 19:48:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:28.252 19:48:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:28.252 19:48:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:28.252 19:48:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:28.252 19:48:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:28.252 19:48:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:28.252 19:48:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:28.252 19:48:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:28.252 19:48:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:28.252 19:48:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:28.252 19:48:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:33:28.252 19:48:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:28.252 19:48:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:28.252 19:48:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:28.252 19:48:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:28.252 19:48:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:28.252 19:48:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:28.252 EAL: No free 2048 kB hugepages reported on node 1 00:33:28.512 [2024-05-15 19:48:54.444385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:400 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:33:28.512 [2024-05-15 19:48:54.444410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0034 p:1 m:0 dnr:0 00:33:28.512 [2024-05-15 19:48:54.445844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:512 len:8 PRP1 0x2000078be000 PRP2 0x0 00:33:28.513 [2024-05-15 19:48:54.445863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0041 p:1 m:0 dnr:0 00:33:28.513 [2024-05-15 19:48:54.513849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2744 len:8 PRP1 0x2000078be000 PRP2 0x0 00:33:28.513 [2024-05-15 19:48:54.513870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:28.513 [2024-05-15 19:48:54.520812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2992 len:8 PRP1 0x2000078c8000 PRP2 0x0 00:33:28.513 [2024-05-15 19:48:54.520832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:31.902 Initializing NVMe Controllers 00:33:31.902 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:31.902 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:31.902 Initialization complete. Launching workers. 00:33:31.903 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 13028, failed: 4 00:33:31.903 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 4649, failed to submit 8383 00:33:31.903 success 646, unsuccess 4003, failed 0 00:33:31.903 19:48:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:31.903 19:48:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:31.903 EAL: No free 2048 kB hugepages reported on node 1 00:33:33.817 [2024-05-15 19:48:59.662559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:173 nsid:1 lba:46088 len:8 PRP1 0x200007c3a000 PRP2 0x0 00:33:33.817 [2024-05-15 19:48:59.662606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:173 cdw0:0 sqhd:0085 p:1 m:0 dnr:0 00:33:34.759 Initializing NVMe Controllers 00:33:34.759 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:34.759 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:34.759 Initialization complete. Launching workers. 
00:33:34.759 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8576, failed: 1 00:33:34.759 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1247, failed to submit 7330 00:33:34.759 success 303, unsuccess 944, failed 0 00:33:34.760 19:49:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:34.760 19:49:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:34.760 EAL: No free 2048 kB hugepages reported on node 1 00:33:36.145 [2024-05-15 19:49:02.062754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:176 nsid:1 lba:123328 len:8 PRP1 0x20000791e000 PRP2 0x0 00:33:36.145 [2024-05-15 19:49:02.062792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:176 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:38.061 Initializing NVMe Controllers 00:33:38.061 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:38.061 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:38.061 Initialization complete. Launching workers. 00:33:38.061 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 42243, failed: 1 00:33:38.061 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2393, failed to submit 39851 00:33:38.061 success 573, unsuccess 1820, failed 0 00:33:38.061 19:49:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:33:38.061 19:49:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.061 19:49:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:38.061 19:49:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:38.061 19:49:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:33:38.061 19:49:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.061 19:49:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:39.975 19:49:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:39.975 19:49:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3841315 00:33:39.975 19:49:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 3841315 ']' 00:33:39.975 19:49:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 3841315 00:33:39.975 19:49:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:33:39.975 19:49:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:39.975 19:49:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3841315 00:33:39.975 19:49:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:39.975 19:49:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' 
reactor_0 = sudo ']' 00:33:39.975 19:49:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3841315' 00:33:39.975 killing process with pid 3841315 00:33:39.975 19:49:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 3841315 00:33:39.975 [2024-05-15 19:49:05.903430] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:33:39.975 19:49:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 3841315 00:33:39.975 00:33:39.975 real 0m12.114s 00:33:39.975 user 0m49.396s 00:33:39.975 sys 0m1.910s 00:33:39.975 19:49:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:39.975 19:49:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:39.975 ************************************ 00:33:39.975 END TEST spdk_target_abort 00:33:39.975 ************************************ 00:33:39.975 19:49:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:33:39.975 19:49:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:39.975 19:49:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:39.975 19:49:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:39.975 ************************************ 00:33:39.975 START TEST kernel_target_abort 00:33:39.975 ************************************ 00:33:39.975 19:49:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:33:39.975 19:49:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:33:39.975 19:49:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:33:39.975 19:49:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:39.975 19:49:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:39.975 19:49:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:39.975 19:49:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:39.975 19:49:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:39.975 19:49:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:39.975 19:49:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:39.975 19:49:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:39.975 19:49:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:39.975 19:49:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:39.975 19:49:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:39.975 19:49:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:33:39.975 19:49:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:39.975 19:49:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:39.975 19:49:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:39.975 19:49:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:33:39.975 19:49:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:33:39.975 19:49:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:33:40.237 19:49:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:40.237 19:49:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:44.447 Waiting for block devices as requested 00:33:44.447 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:44.447 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:44.447 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:44.447 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:44.447 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:44.447 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:44.447 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:44.447 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:44.708 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:33:44.708 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:44.969 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:44.969 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:44.969 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:45.230 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:45.230 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:45.230 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:45.491 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:45.752 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:33:45.752 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:45.752 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:33:45.752 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:33:45.752 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:45.752 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:33:45.752 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:33:45.752 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:33:45.752 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:45.752 No valid GPT data, bailing 00:33:45.752 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:45.752 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:33:45.752 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 
1 00:33:45.752 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:33:45.752 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:33:45.752 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:45.752 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:45.752 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:45.752 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:45.752 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:33:45.752 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:33:45.752 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:33:45.752 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:33:45.752 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:33:45.752 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:33:45.752 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:33:45.752 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:45.752 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:33:45.752 00:33:45.752 Discovery Log Number of Records 2, Generation counter 2 00:33:45.752 =====Discovery Log Entry 0====== 00:33:45.752 trtype: tcp 00:33:45.752 adrfam: ipv4 00:33:45.752 subtype: current discovery subsystem 00:33:45.752 treq: not specified, sq flow control disable supported 00:33:45.752 portid: 1 00:33:45.752 trsvcid: 4420 00:33:45.752 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:45.752 traddr: 10.0.0.1 00:33:45.752 eflags: none 00:33:45.752 sectype: none 00:33:45.752 =====Discovery Log Entry 1====== 00:33:45.752 trtype: tcp 00:33:45.752 adrfam: ipv4 00:33:45.752 subtype: nvme subsystem 00:33:45.752 treq: not specified, sq flow control disable supported 00:33:45.752 portid: 1 00:33:45.752 trsvcid: 4420 00:33:45.752 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:45.752 traddr: 10.0.0.1 00:33:45.752 eflags: none 00:33:45.752 sectype: none 00:33:45.752 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:33:45.752 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:45.752 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:45.752 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:33:45.752 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:45.752 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- 
# local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:45.752 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:45.752 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:45.752 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:45.752 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:45.752 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:45.752 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:45.752 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:45.753 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:45.753 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:33:45.753 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:45.753 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:33:45.753 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:46.013 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:46.013 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:46.013 19:49:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:46.013 EAL: No free 2048 kB hugepages reported on node 1 00:33:49.317 Initializing NVMe Controllers 00:33:49.317 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:49.317 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:49.317 Initialization complete. Launching workers. 
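The 10.0.0.1 target this pass is aborting against is the in-kernel nvmet target wired up through configfs a few entries back (mkdir of the subsystem, namespace and port, a series of echo writes, then the ln -s). A condensed sketch of that sequence with the standard nvmet attribute names filled in; the attribute file names are not visible in the trace itself, and the backing device /dev/nvme0n1 is taken from the block-device scan above:

# Sketch of the configure_kernel_target steps; run as root.
modprobe nvmet
modprobe nvmet_tcp
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
mkdir -p "$subsys/namespaces/1" "$port"
echo 1            > "$subsys/attr_allow_any_host"      # assumed attribute; the trace only shows 'echo 1'
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"                    # publish the subsystem on the port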
00:33:49.317 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 52601, failed: 0 00:33:49.317 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 52601, failed to submit 0 00:33:49.317 success 0, unsuccess 52601, failed 0 00:33:49.317 19:49:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:49.317 19:49:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:49.317 EAL: No free 2048 kB hugepages reported on node 1 00:33:52.619 Initializing NVMe Controllers 00:33:52.619 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:52.619 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:52.619 Initialization complete. Launching workers. 00:33:52.619 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 91359, failed: 0 00:33:52.619 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 23010, failed to submit 68349 00:33:52.619 success 0, unsuccess 23010, failed 0 00:33:52.619 19:49:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:52.619 19:49:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:52.619 EAL: No free 2048 kB hugepages reported on node 1 00:33:55.166 Initializing NVMe Controllers 00:33:55.166 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:55.166 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:55.166 Initialization complete. Launching workers. 
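Each of these kernel-target passes ends with the same two summary lines: an 'I/O completed / failed' pair for the namespace and a 'success N, unsuccess N, failed N' abort tally. Purely as an illustration of reading those tallies (this is not part of abort_qd_sizes.sh), a captured run could be summarised like this, assuming the abort example's stdout was saved to abort.log without the CI timestamp prefixes:

# Hypothetical post-processing helper; abort.log is an assumed capture file.
awk '/success .*, unsuccess / {
         gsub(/,/, "")                                  # drop the commas after the counters
         printf "success=%s unsuccess=%s failed=%s\n", $2, $4, $6
     }' abort.log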
00:33:55.166 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 91184, failed: 0 00:33:55.166 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22802, failed to submit 68382 00:33:55.166 success 0, unsuccess 22802, failed 0 00:33:55.166 19:49:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:33:55.166 19:49:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:55.166 19:49:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:33:55.166 19:49:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:55.166 19:49:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:55.166 19:49:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:55.166 19:49:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:55.166 19:49:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:55.166 19:49:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:55.166 19:49:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:59.374 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:59.374 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:59.374 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:33:59.374 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:59.374 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:59.374 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:33:59.374 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:59.374 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:59.374 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:59.374 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:59.374 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:33:59.374 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:59.374 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:59.374 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:33:59.374 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:59.375 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:34:01.291 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:34:01.291 00:34:01.291 real 0m21.210s 00:34:01.291 user 0m8.914s 00:34:01.291 sys 0m6.998s 00:34:01.291 19:49:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:01.291 19:49:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:01.291 ************************************ 00:34:01.291 END TEST kernel_target_abort 00:34:01.291 ************************************ 00:34:01.291 19:49:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:34:01.291 19:49:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:34:01.291 19:49:27 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:01.291 19:49:27 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:34:01.291 19:49:27 nvmf_abort_qd_sizes -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:01.291 19:49:27 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:34:01.291 19:49:27 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:01.291 19:49:27 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:01.291 rmmod nvme_tcp 00:34:01.291 rmmod nvme_fabrics 00:34:01.291 rmmod nvme_keyring 00:34:01.291 19:49:27 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:01.291 19:49:27 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:34:01.291 19:49:27 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:34:01.291 19:49:27 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 3841315 ']' 00:34:01.291 19:49:27 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 3841315 00:34:01.291 19:49:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 3841315 ']' 00:34:01.291 19:49:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 3841315 00:34:01.291 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3841315) - No such process 00:34:01.291 19:49:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 3841315 is not found' 00:34:01.291 Process with pid 3841315 is not found 00:34:01.291 19:49:27 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:34:01.291 19:49:27 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:05.495 Waiting for block devices as requested 00:34:05.495 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:34:05.495 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:34:05.495 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:34:05.495 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:34:05.495 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:34:05.495 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:34:05.495 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:34:05.756 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:34:05.756 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:34:06.016 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:34:06.016 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:34:06.016 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:34:06.276 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:34:06.276 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:34:06.276 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:34:06.537 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:34:06.537 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:34:06.798 19:49:32 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:06.798 19:49:32 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:06.798 19:49:32 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:06.798 19:49:32 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:06.798 19:49:32 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:06.798 19:49:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:06.798 19:49:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:09.344 19:49:34 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:09.344 00:34:09.344 real 0m54.933s 00:34:09.344 user 1m4.055s 00:34:09.344 sys 0m21.305s 00:34:09.344 19:49:34 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:34:09.344 19:49:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:09.344 ************************************ 00:34:09.344 END TEST nvmf_abort_qd_sizes 00:34:09.344 ************************************ 00:34:09.344 19:49:34 -- spdk/autotest.sh@291 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:34:09.344 19:49:34 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:09.344 19:49:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:09.344 19:49:34 -- common/autotest_common.sh@10 -- # set +x 00:34:09.344 ************************************ 00:34:09.344 START TEST keyring_file 00:34:09.344 ************************************ 00:34:09.344 19:49:35 keyring_file -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:34:09.344 * Looking for test storage... 00:34:09.344 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:34:09.344 19:49:35 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:34:09.344 19:49:35 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:09.344 19:49:35 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:34:09.344 19:49:35 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:09.344 19:49:35 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:09.344 19:49:35 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:09.344 19:49:35 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:09.344 19:49:35 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:09.344 19:49:35 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:09.344 19:49:35 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:09.344 19:49:35 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:09.344 19:49:35 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:09.344 19:49:35 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:09.344 19:49:35 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:09.344 19:49:35 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:09.344 19:49:35 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:09.344 19:49:35 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:09.344 19:49:35 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:09.344 19:49:35 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:09.344 19:49:35 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:09.344 19:49:35 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:09.344 19:49:35 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:09.344 19:49:35 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:09.344 19:49:35 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.344 19:49:35 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.344 19:49:35 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.344 19:49:35 keyring_file -- paths/export.sh@5 -- # export PATH 00:34:09.344 19:49:35 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.344 19:49:35 keyring_file -- nvmf/common.sh@47 -- # : 0 00:34:09.344 19:49:35 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:09.344 19:49:35 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:09.344 19:49:35 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:09.344 19:49:35 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:09.344 19:49:35 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:09.344 19:49:35 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:09.344 19:49:35 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:09.344 19:49:35 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:09.344 19:49:35 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:34:09.344 19:49:35 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:34:09.344 19:49:35 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:34:09.344 19:49:35 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:34:09.344 19:49:35 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:34:09.344 19:49:35 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:34:09.344 19:49:35 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:34:09.344 19:49:35 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:34:09.344 19:49:35 keyring_file -- keyring/common.sh@17 -- # name=key0 00:34:09.344 19:49:35 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:09.344 19:49:35 keyring_file -- 
keyring/common.sh@17 -- # digest=0 00:34:09.344 19:49:35 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:09.344 19:49:35 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.yJC3CE7YcH 00:34:09.344 19:49:35 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:09.344 19:49:35 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:09.344 19:49:35 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:34:09.344 19:49:35 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:34:09.344 19:49:35 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:34:09.344 19:49:35 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:34:09.344 19:49:35 keyring_file -- nvmf/common.sh@705 -- # python - 00:34:09.345 19:49:35 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.yJC3CE7YcH 00:34:09.345 19:49:35 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.yJC3CE7YcH 00:34:09.345 19:49:35 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.yJC3CE7YcH 00:34:09.345 19:49:35 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:34:09.345 19:49:35 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:34:09.345 19:49:35 keyring_file -- keyring/common.sh@17 -- # name=key1 00:34:09.345 19:49:35 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:34:09.345 19:49:35 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:09.345 19:49:35 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:09.345 19:49:35 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.WjZtWghQaH 00:34:09.345 19:49:35 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:34:09.345 19:49:35 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:34:09.345 19:49:35 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:34:09.345 19:49:35 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:34:09.345 19:49:35 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:34:09.345 19:49:35 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:34:09.345 19:49:35 keyring_file -- nvmf/common.sh@705 -- # python - 00:34:09.345 19:49:35 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.WjZtWghQaH 00:34:09.345 19:49:35 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.WjZtWghQaH 00:34:09.345 19:49:35 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.WjZtWghQaH 00:34:09.345 19:49:35 keyring_file -- keyring/file.sh@30 -- # tgtpid=3852152 00:34:09.345 19:49:35 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3852152 00:34:09.345 19:49:35 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:34:09.345 19:49:35 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 3852152 ']' 00:34:09.345 19:49:35 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:09.345 19:49:35 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:09.345 19:49:35 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:09.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
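The two /tmp/tmp.* files prepared above hold PSKs in the NVMe TLS interchange format (the NVMeTLSkey-1 prefix piped through python just before), and the chmod 0600 matters because a later step deliberately loosens the mode to check that such a key is rejected. Once the bdevperf instance started further down is listening on /var/tmp/bperf.sock, registering and inspecting a key boils down to the following sketch; key_path stands for one of the generated files:

# Sketch; key_path is a placeholder for a prep_key output such as /tmp/tmp.yJC3CE7YcH.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
chmod 0600 "$key_path"                                  # key files must not be group or world readable
"$rpc" -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key_path"
"$rpc" -s /var/tmp/bperf.sock keyring_get_keys | jq '.[] | select(.name == "key0")'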
00:34:09.345 19:49:35 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:09.345 19:49:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:09.345 [2024-05-15 19:49:35.329394] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:34:09.345 [2024-05-15 19:49:35.329462] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3852152 ] 00:34:09.345 EAL: No free 2048 kB hugepages reported on node 1 00:34:09.345 [2024-05-15 19:49:35.419791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:09.345 [2024-05-15 19:49:35.515536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:10.287 19:49:36 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:10.287 19:49:36 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:34:10.287 19:49:36 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:34:10.287 19:49:36 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.287 19:49:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:10.287 [2024-05-15 19:49:36.169835] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:10.287 null0 00:34:10.287 [2024-05-15 19:49:36.201864] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:34:10.287 [2024-05-15 19:49:36.201934] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:34:10.287 [2024-05-15 19:49:36.202404] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:34:10.287 [2024-05-15 19:49:36.209912] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:34:10.287 19:49:36 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.287 19:49:36 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:10.287 19:49:36 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:34:10.287 19:49:36 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:10.287 19:49:36 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:34:10.287 19:49:36 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:10.287 19:49:36 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:34:10.287 19:49:36 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:10.287 19:49:36 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:10.287 19:49:36 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.287 19:49:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:10.287 [2024-05-15 19:49:36.225941] nvmf_rpc.c: 773:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:34:10.287 request: 00:34:10.287 { 00:34:10.287 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:34:10.287 "secure_channel": false, 00:34:10.287 "listen_address": { 00:34:10.287 "trtype": "tcp", 00:34:10.287 
"traddr": "127.0.0.1", 00:34:10.287 "trsvcid": "4420" 00:34:10.287 }, 00:34:10.287 "method": "nvmf_subsystem_add_listener", 00:34:10.287 "req_id": 1 00:34:10.287 } 00:34:10.287 Got JSON-RPC error response 00:34:10.287 response: 00:34:10.287 { 00:34:10.287 "code": -32602, 00:34:10.287 "message": "Invalid parameters" 00:34:10.287 } 00:34:10.287 19:49:36 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:34:10.287 19:49:36 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:34:10.287 19:49:36 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:10.287 19:49:36 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:10.287 19:49:36 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:10.287 19:49:36 keyring_file -- keyring/file.sh@46 -- # bperfpid=3852198 00:34:10.287 19:49:36 keyring_file -- keyring/file.sh@48 -- # waitforlisten 3852198 /var/tmp/bperf.sock 00:34:10.287 19:49:36 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 3852198 ']' 00:34:10.287 19:49:36 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:10.287 19:49:36 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:34:10.287 19:49:36 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:10.287 19:49:36 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:10.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:10.287 19:49:36 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:10.287 19:49:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:10.287 [2024-05-15 19:49:36.283769] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 
00:34:10.287 [2024-05-15 19:49:36.283835] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3852198 ] 00:34:10.287 EAL: No free 2048 kB hugepages reported on node 1 00:34:10.287 [2024-05-15 19:49:36.353916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:10.287 [2024-05-15 19:49:36.429621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:10.547 19:49:36 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:10.547 19:49:36 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:34:10.547 19:49:36 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.yJC3CE7YcH 00:34:10.547 19:49:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.yJC3CE7YcH 00:34:10.547 19:49:36 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.WjZtWghQaH 00:34:10.547 19:49:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.WjZtWghQaH 00:34:10.807 19:49:36 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:34:10.807 19:49:36 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:34:10.807 19:49:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:10.807 19:49:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:10.807 19:49:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:11.068 19:49:37 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.yJC3CE7YcH == \/\t\m\p\/\t\m\p\.\y\J\C\3\C\E\7\Y\c\H ]] 00:34:11.068 19:49:37 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:34:11.068 19:49:37 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:34:11.068 19:49:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:11.068 19:49:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:11.068 19:49:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:11.328 19:49:37 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.WjZtWghQaH == \/\t\m\p\/\t\m\p\.\W\j\Z\t\W\g\h\Q\a\H ]] 00:34:11.328 19:49:37 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:34:11.328 19:49:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:11.328 19:49:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:11.328 19:49:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:11.328 19:49:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:11.328 19:49:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:11.588 19:49:37 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:34:11.588 19:49:37 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:34:11.588 19:49:37 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:11.588 19:49:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:11.588 19:49:37 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:11.588 19:49:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:11.588 19:49:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:11.588 19:49:37 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:34:11.588 19:49:37 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:11.588 19:49:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:11.848 [2024-05-15 19:49:37.954268] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:11.848 nvme0n1 00:34:12.151 19:49:38 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:34:12.151 19:49:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:12.151 19:49:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:12.151 19:49:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:12.151 19:49:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:12.151 19:49:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:12.151 19:49:38 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:34:12.151 19:49:38 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:34:12.151 19:49:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:12.151 19:49:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:12.151 19:49:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:12.151 19:49:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:12.151 19:49:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:12.439 19:49:38 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:34:12.439 19:49:38 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:12.439 Running I/O for 1 seconds... 
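The one-second I/O pass starting here is driven entirely over the bperf socket: the bdev_nvme_attach_controller call above brings up nvme0n1 against the TLS listener using key0, and bdevperf.py then kicks off the preconfigured 4k randrw job. A compressed sketch of that sequence, with the paths and arguments taken from this trace and assuming the bdevperf instance started earlier with -z is still running:

# Sketch of the TLS-enabled bdevperf flow shown in this test.
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$spdk/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
"$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests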
00:34:13.826 00:34:13.827 Latency(us) 00:34:13.827 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:13.827 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:34:13.827 nvme0n1 : 1.01 8895.01 34.75 0.00 0.00 14285.90 6963.20 21517.65 00:34:13.827 =================================================================================================================== 00:34:13.827 Total : 8895.01 34.75 0.00 0.00 14285.90 6963.20 21517.65 00:34:13.827 0 00:34:13.827 19:49:39 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:13.827 19:49:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:34:13.827 19:49:39 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:34:13.827 19:49:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:13.827 19:49:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:13.827 19:49:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:13.827 19:49:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:13.827 19:49:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:14.087 19:49:40 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:34:14.087 19:49:40 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:34:14.087 19:49:40 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:14.087 19:49:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:14.087 19:49:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:14.088 19:49:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:14.088 19:49:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:14.088 19:49:40 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:34:14.088 19:49:40 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:14.088 19:49:40 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:34:14.088 19:49:40 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:14.088 19:49:40 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:34:14.348 19:49:40 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:14.348 19:49:40 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:34:14.348 19:49:40 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:14.348 19:49:40 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:14.348 19:49:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk key1 00:34:14.348 [2024-05-15 19:49:40.469751] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:34:14.349 [2024-05-15 19:49:40.470105] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24733f0 (107): Transport endpoint is not connected 00:34:14.349 [2024-05-15 19:49:40.471098] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24733f0 (9): Bad file descriptor 00:34:14.349 [2024-05-15 19:49:40.472099] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:14.349 [2024-05-15 19:49:40.472109] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:34:14.349 [2024-05-15 19:49:40.472116] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:14.349 request: 00:34:14.349 { 00:34:14.349 "name": "nvme0", 00:34:14.349 "trtype": "tcp", 00:34:14.349 "traddr": "127.0.0.1", 00:34:14.349 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:14.349 "adrfam": "ipv4", 00:34:14.349 "trsvcid": "4420", 00:34:14.349 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:14.349 "psk": "key1", 00:34:14.349 "method": "bdev_nvme_attach_controller", 00:34:14.349 "req_id": 1 00:34:14.349 } 00:34:14.349 Got JSON-RPC error response 00:34:14.349 response: 00:34:14.349 { 00:34:14.349 "code": -32602, 00:34:14.349 "message": "Invalid parameters" 00:34:14.349 } 00:34:14.349 19:49:40 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:34:14.349 19:49:40 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:14.349 19:49:40 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:14.349 19:49:40 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:14.349 19:49:40 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:34:14.349 19:49:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:14.349 19:49:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:14.349 19:49:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:14.349 19:49:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:14.349 19:49:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:14.609 19:49:40 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:34:14.609 19:49:40 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:34:14.609 19:49:40 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:14.609 19:49:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:14.609 19:49:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:14.609 19:49:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:14.609 19:49:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:14.869 19:49:40 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:34:14.869 19:49:40 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:34:14.869 19:49:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key 
key0 00:34:15.129 19:49:41 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:34:15.129 19:49:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:34:15.129 19:49:41 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:34:15.129 19:49:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:15.129 19:49:41 keyring_file -- keyring/file.sh@77 -- # jq length 00:34:15.389 19:49:41 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:34:15.389 19:49:41 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.yJC3CE7YcH 00:34:15.389 19:49:41 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.yJC3CE7YcH 00:34:15.389 19:49:41 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:34:15.389 19:49:41 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.yJC3CE7YcH 00:34:15.389 19:49:41 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:34:15.389 19:49:41 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:15.389 19:49:41 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:34:15.389 19:49:41 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:15.389 19:49:41 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.yJC3CE7YcH 00:34:15.389 19:49:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.yJC3CE7YcH 00:34:15.650 [2024-05-15 19:49:41.692445] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.yJC3CE7YcH': 0100660 00:34:15.650 [2024-05-15 19:49:41.692467] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:34:15.650 request: 00:34:15.650 { 00:34:15.650 "name": "key0", 00:34:15.650 "path": "/tmp/tmp.yJC3CE7YcH", 00:34:15.650 "method": "keyring_file_add_key", 00:34:15.650 "req_id": 1 00:34:15.650 } 00:34:15.650 Got JSON-RPC error response 00:34:15.650 response: 00:34:15.650 { 00:34:15.650 "code": -1, 00:34:15.650 "message": "Operation not permitted" 00:34:15.650 } 00:34:15.650 19:49:41 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:34:15.650 19:49:41 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:15.650 19:49:41 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:15.650 19:49:41 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:15.650 19:49:41 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.yJC3CE7YcH 00:34:15.650 19:49:41 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.yJC3CE7YcH 00:34:15.650 19:49:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.yJC3CE7YcH 00:34:15.910 19:49:41 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.yJC3CE7YcH 00:34:15.910 19:49:41 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:34:15.910 19:49:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:15.910 19:49:41 keyring_file -- keyring/common.sh@12 -- # jq -r 
.refcnt 00:34:15.910 19:49:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:15.910 19:49:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:15.910 19:49:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:16.170 19:49:42 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:34:16.170 19:49:42 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:16.170 19:49:42 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:34:16.170 19:49:42 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:16.170 19:49:42 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:34:16.170 19:49:42 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:16.170 19:49:42 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:34:16.170 19:49:42 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:16.170 19:49:42 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:16.170 19:49:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:16.170 [2024-05-15 19:49:42.322051] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.yJC3CE7YcH': No such file or directory 00:34:16.170 [2024-05-15 19:49:42.322066] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:34:16.170 [2024-05-15 19:49:42.322088] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:34:16.170 [2024-05-15 19:49:42.322095] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:16.170 [2024-05-15 19:49:42.322101] bdev_nvme.c:6252:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:34:16.170 request: 00:34:16.170 { 00:34:16.170 "name": "nvme0", 00:34:16.170 "trtype": "tcp", 00:34:16.170 "traddr": "127.0.0.1", 00:34:16.170 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:16.170 "adrfam": "ipv4", 00:34:16.170 "trsvcid": "4420", 00:34:16.170 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:16.170 "psk": "key0", 00:34:16.170 "method": "bdev_nvme_attach_controller", 00:34:16.170 "req_id": 1 00:34:16.170 } 00:34:16.170 Got JSON-RPC error response 00:34:16.170 response: 00:34:16.170 { 00:34:16.170 "code": -19, 00:34:16.170 "message": "No such device" 00:34:16.170 } 00:34:16.170 19:49:42 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:34:16.170 19:49:42 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:16.170 19:49:42 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:16.170 19:49:42 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:16.170 19:49:42 
keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:34:16.170 19:49:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:16.430 19:49:42 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:34:16.430 19:49:42 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:34:16.430 19:49:42 keyring_file -- keyring/common.sh@17 -- # name=key0 00:34:16.430 19:49:42 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:16.430 19:49:42 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:16.430 19:49:42 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:16.430 19:49:42 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.YBDJw5O6v4 00:34:16.430 19:49:42 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:16.430 19:49:42 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:16.430 19:49:42 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:34:16.430 19:49:42 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:34:16.430 19:49:42 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:34:16.430 19:49:42 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:34:16.430 19:49:42 keyring_file -- nvmf/common.sh@705 -- # python - 00:34:16.430 19:49:42 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.YBDJw5O6v4 00:34:16.430 19:49:42 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.YBDJw5O6v4 00:34:16.690 19:49:42 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.YBDJw5O6v4 00:34:16.690 19:49:42 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YBDJw5O6v4 00:34:16.690 19:49:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YBDJw5O6v4 00:34:16.690 19:49:42 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:16.690 19:49:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:16.949 nvme0n1 00:34:16.949 19:49:43 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:34:16.949 19:49:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:16.949 19:49:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:16.949 19:49:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:16.949 19:49:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:16.949 19:49:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:17.210 19:49:43 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:34:17.210 19:49:43 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:34:17.210 19:49:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_file_remove_key key0 00:34:17.470 19:49:43 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:34:17.470 19:49:43 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:34:17.470 19:49:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:17.470 19:49:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:17.470 19:49:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:17.730 19:49:43 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:34:17.730 19:49:43 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:34:17.730 19:49:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:17.730 19:49:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:17.730 19:49:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:17.730 19:49:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:17.730 19:49:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:17.730 19:49:43 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:34:17.730 19:49:43 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:17.990 19:49:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:34:17.990 19:49:44 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:34:17.990 19:49:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:17.990 19:49:44 keyring_file -- keyring/file.sh@104 -- # jq length 00:34:18.250 19:49:44 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:34:18.250 19:49:44 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YBDJw5O6v4 00:34:18.250 19:49:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YBDJw5O6v4 00:34:18.510 19:49:44 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.WjZtWghQaH 00:34:18.510 19:49:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.WjZtWghQaH 00:34:18.770 19:49:44 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:18.770 19:49:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:19.029 nvme0n1 00:34:19.029 19:49:44 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:34:19.029 19:49:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:34:19.290 19:49:45 keyring_file -- keyring/file.sh@112 -- # config='{ 00:34:19.290 "subsystems": [ 00:34:19.290 { 00:34:19.290 
"subsystem": "keyring", 00:34:19.290 "config": [ 00:34:19.290 { 00:34:19.290 "method": "keyring_file_add_key", 00:34:19.290 "params": { 00:34:19.290 "name": "key0", 00:34:19.290 "path": "/tmp/tmp.YBDJw5O6v4" 00:34:19.290 } 00:34:19.290 }, 00:34:19.290 { 00:34:19.290 "method": "keyring_file_add_key", 00:34:19.290 "params": { 00:34:19.290 "name": "key1", 00:34:19.290 "path": "/tmp/tmp.WjZtWghQaH" 00:34:19.290 } 00:34:19.290 } 00:34:19.290 ] 00:34:19.290 }, 00:34:19.290 { 00:34:19.290 "subsystem": "iobuf", 00:34:19.290 "config": [ 00:34:19.290 { 00:34:19.290 "method": "iobuf_set_options", 00:34:19.290 "params": { 00:34:19.290 "small_pool_count": 8192, 00:34:19.290 "large_pool_count": 1024, 00:34:19.290 "small_bufsize": 8192, 00:34:19.290 "large_bufsize": 135168 00:34:19.290 } 00:34:19.290 } 00:34:19.290 ] 00:34:19.290 }, 00:34:19.290 { 00:34:19.290 "subsystem": "sock", 00:34:19.290 "config": [ 00:34:19.290 { 00:34:19.290 "method": "sock_impl_set_options", 00:34:19.290 "params": { 00:34:19.290 "impl_name": "posix", 00:34:19.290 "recv_buf_size": 2097152, 00:34:19.290 "send_buf_size": 2097152, 00:34:19.290 "enable_recv_pipe": true, 00:34:19.290 "enable_quickack": false, 00:34:19.290 "enable_placement_id": 0, 00:34:19.290 "enable_zerocopy_send_server": true, 00:34:19.290 "enable_zerocopy_send_client": false, 00:34:19.290 "zerocopy_threshold": 0, 00:34:19.290 "tls_version": 0, 00:34:19.290 "enable_ktls": false 00:34:19.290 } 00:34:19.290 }, 00:34:19.290 { 00:34:19.290 "method": "sock_impl_set_options", 00:34:19.290 "params": { 00:34:19.290 "impl_name": "ssl", 00:34:19.290 "recv_buf_size": 4096, 00:34:19.290 "send_buf_size": 4096, 00:34:19.290 "enable_recv_pipe": true, 00:34:19.290 "enable_quickack": false, 00:34:19.290 "enable_placement_id": 0, 00:34:19.290 "enable_zerocopy_send_server": true, 00:34:19.290 "enable_zerocopy_send_client": false, 00:34:19.290 "zerocopy_threshold": 0, 00:34:19.290 "tls_version": 0, 00:34:19.290 "enable_ktls": false 00:34:19.290 } 00:34:19.290 } 00:34:19.290 ] 00:34:19.290 }, 00:34:19.290 { 00:34:19.290 "subsystem": "vmd", 00:34:19.290 "config": [] 00:34:19.290 }, 00:34:19.290 { 00:34:19.290 "subsystem": "accel", 00:34:19.290 "config": [ 00:34:19.290 { 00:34:19.290 "method": "accel_set_options", 00:34:19.290 "params": { 00:34:19.290 "small_cache_size": 128, 00:34:19.290 "large_cache_size": 16, 00:34:19.290 "task_count": 2048, 00:34:19.290 "sequence_count": 2048, 00:34:19.290 "buf_count": 2048 00:34:19.290 } 00:34:19.290 } 00:34:19.290 ] 00:34:19.290 }, 00:34:19.290 { 00:34:19.290 "subsystem": "bdev", 00:34:19.290 "config": [ 00:34:19.290 { 00:34:19.290 "method": "bdev_set_options", 00:34:19.290 "params": { 00:34:19.290 "bdev_io_pool_size": 65535, 00:34:19.290 "bdev_io_cache_size": 256, 00:34:19.290 "bdev_auto_examine": true, 00:34:19.290 "iobuf_small_cache_size": 128, 00:34:19.290 "iobuf_large_cache_size": 16 00:34:19.290 } 00:34:19.290 }, 00:34:19.290 { 00:34:19.290 "method": "bdev_raid_set_options", 00:34:19.290 "params": { 00:34:19.290 "process_window_size_kb": 1024 00:34:19.290 } 00:34:19.290 }, 00:34:19.290 { 00:34:19.290 "method": "bdev_iscsi_set_options", 00:34:19.290 "params": { 00:34:19.290 "timeout_sec": 30 00:34:19.290 } 00:34:19.290 }, 00:34:19.290 { 00:34:19.290 "method": "bdev_nvme_set_options", 00:34:19.290 "params": { 00:34:19.290 "action_on_timeout": "none", 00:34:19.290 "timeout_us": 0, 00:34:19.290 "timeout_admin_us": 0, 00:34:19.290 "keep_alive_timeout_ms": 10000, 00:34:19.290 "arbitration_burst": 0, 00:34:19.290 "low_priority_weight": 0, 
00:34:19.290 "medium_priority_weight": 0, 00:34:19.290 "high_priority_weight": 0, 00:34:19.290 "nvme_adminq_poll_period_us": 10000, 00:34:19.290 "nvme_ioq_poll_period_us": 0, 00:34:19.290 "io_queue_requests": 512, 00:34:19.290 "delay_cmd_submit": true, 00:34:19.290 "transport_retry_count": 4, 00:34:19.290 "bdev_retry_count": 3, 00:34:19.290 "transport_ack_timeout": 0, 00:34:19.290 "ctrlr_loss_timeout_sec": 0, 00:34:19.290 "reconnect_delay_sec": 0, 00:34:19.290 "fast_io_fail_timeout_sec": 0, 00:34:19.290 "disable_auto_failback": false, 00:34:19.290 "generate_uuids": false, 00:34:19.290 "transport_tos": 0, 00:34:19.290 "nvme_error_stat": false, 00:34:19.290 "rdma_srq_size": 0, 00:34:19.290 "io_path_stat": false, 00:34:19.290 "allow_accel_sequence": false, 00:34:19.290 "rdma_max_cq_size": 0, 00:34:19.290 "rdma_cm_event_timeout_ms": 0, 00:34:19.290 "dhchap_digests": [ 00:34:19.290 "sha256", 00:34:19.290 "sha384", 00:34:19.290 "sha512" 00:34:19.290 ], 00:34:19.290 "dhchap_dhgroups": [ 00:34:19.290 "null", 00:34:19.290 "ffdhe2048", 00:34:19.290 "ffdhe3072", 00:34:19.290 "ffdhe4096", 00:34:19.290 "ffdhe6144", 00:34:19.290 "ffdhe8192" 00:34:19.290 ] 00:34:19.290 } 00:34:19.290 }, 00:34:19.290 { 00:34:19.290 "method": "bdev_nvme_attach_controller", 00:34:19.290 "params": { 00:34:19.290 "name": "nvme0", 00:34:19.290 "trtype": "TCP", 00:34:19.290 "adrfam": "IPv4", 00:34:19.290 "traddr": "127.0.0.1", 00:34:19.290 "trsvcid": "4420", 00:34:19.290 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:19.290 "prchk_reftag": false, 00:34:19.290 "prchk_guard": false, 00:34:19.290 "ctrlr_loss_timeout_sec": 0, 00:34:19.290 "reconnect_delay_sec": 0, 00:34:19.290 "fast_io_fail_timeout_sec": 0, 00:34:19.290 "psk": "key0", 00:34:19.290 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:19.290 "hdgst": false, 00:34:19.290 "ddgst": false 00:34:19.290 } 00:34:19.290 }, 00:34:19.290 { 00:34:19.290 "method": "bdev_nvme_set_hotplug", 00:34:19.290 "params": { 00:34:19.290 "period_us": 100000, 00:34:19.290 "enable": false 00:34:19.290 } 00:34:19.290 }, 00:34:19.290 { 00:34:19.290 "method": "bdev_wait_for_examine" 00:34:19.290 } 00:34:19.290 ] 00:34:19.290 }, 00:34:19.290 { 00:34:19.290 "subsystem": "nbd", 00:34:19.290 "config": [] 00:34:19.290 } 00:34:19.290 ] 00:34:19.290 }' 00:34:19.290 19:49:45 keyring_file -- keyring/file.sh@114 -- # killprocess 3852198 00:34:19.290 19:49:45 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 3852198 ']' 00:34:19.290 19:49:45 keyring_file -- common/autotest_common.sh@950 -- # kill -0 3852198 00:34:19.290 19:49:45 keyring_file -- common/autotest_common.sh@951 -- # uname 00:34:19.290 19:49:45 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:19.290 19:49:45 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3852198 00:34:19.290 19:49:45 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:34:19.290 19:49:45 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:34:19.291 19:49:45 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3852198' 00:34:19.291 killing process with pid 3852198 00:34:19.291 19:49:45 keyring_file -- common/autotest_common.sh@965 -- # kill 3852198 00:34:19.291 Received shutdown signal, test time was about 1.000000 seconds 00:34:19.291 00:34:19.291 Latency(us) 00:34:19.291 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:19.291 
=================================================================================================================== 00:34:19.291 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:19.291 19:49:45 keyring_file -- common/autotest_common.sh@970 -- # wait 3852198 00:34:19.291 19:49:45 keyring_file -- keyring/file.sh@117 -- # bperfpid=3854141 00:34:19.291 19:49:45 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:34:19.291 19:49:45 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:34:19.291 "subsystems": [ 00:34:19.291 { 00:34:19.291 "subsystem": "keyring", 00:34:19.291 "config": [ 00:34:19.291 { 00:34:19.291 "method": "keyring_file_add_key", 00:34:19.291 "params": { 00:34:19.291 "name": "key0", 00:34:19.291 "path": "/tmp/tmp.YBDJw5O6v4" 00:34:19.291 } 00:34:19.291 }, 00:34:19.291 { 00:34:19.291 "method": "keyring_file_add_key", 00:34:19.291 "params": { 00:34:19.291 "name": "key1", 00:34:19.291 "path": "/tmp/tmp.WjZtWghQaH" 00:34:19.291 } 00:34:19.291 } 00:34:19.291 ] 00:34:19.291 }, 00:34:19.291 { 00:34:19.291 "subsystem": "iobuf", 00:34:19.291 "config": [ 00:34:19.291 { 00:34:19.291 "method": "iobuf_set_options", 00:34:19.291 "params": { 00:34:19.291 "small_pool_count": 8192, 00:34:19.291 "large_pool_count": 1024, 00:34:19.291 "small_bufsize": 8192, 00:34:19.291 "large_bufsize": 135168 00:34:19.291 } 00:34:19.291 } 00:34:19.291 ] 00:34:19.291 }, 00:34:19.291 { 00:34:19.291 "subsystem": "sock", 00:34:19.291 "config": [ 00:34:19.291 { 00:34:19.291 "method": "sock_impl_set_options", 00:34:19.291 "params": { 00:34:19.291 "impl_name": "posix", 00:34:19.291 "recv_buf_size": 2097152, 00:34:19.291 "send_buf_size": 2097152, 00:34:19.291 "enable_recv_pipe": true, 00:34:19.291 "enable_quickack": false, 00:34:19.291 "enable_placement_id": 0, 00:34:19.291 "enable_zerocopy_send_server": true, 00:34:19.291 "enable_zerocopy_send_client": false, 00:34:19.291 "zerocopy_threshold": 0, 00:34:19.291 "tls_version": 0, 00:34:19.291 "enable_ktls": false 00:34:19.291 } 00:34:19.291 }, 00:34:19.291 { 00:34:19.291 "method": "sock_impl_set_options", 00:34:19.291 "params": { 00:34:19.291 "impl_name": "ssl", 00:34:19.291 "recv_buf_size": 4096, 00:34:19.291 "send_buf_size": 4096, 00:34:19.291 "enable_recv_pipe": true, 00:34:19.291 "enable_quickack": false, 00:34:19.291 "enable_placement_id": 0, 00:34:19.291 "enable_zerocopy_send_server": true, 00:34:19.291 "enable_zerocopy_send_client": false, 00:34:19.291 "zerocopy_threshold": 0, 00:34:19.291 "tls_version": 0, 00:34:19.291 "enable_ktls": false 00:34:19.291 } 00:34:19.291 } 00:34:19.291 ] 00:34:19.291 }, 00:34:19.291 { 00:34:19.291 "subsystem": "vmd", 00:34:19.291 "config": [] 00:34:19.291 }, 00:34:19.291 { 00:34:19.291 "subsystem": "accel", 00:34:19.291 "config": [ 00:34:19.291 { 00:34:19.291 "method": "accel_set_options", 00:34:19.291 "params": { 00:34:19.291 "small_cache_size": 128, 00:34:19.291 "large_cache_size": 16, 00:34:19.291 "task_count": 2048, 00:34:19.291 "sequence_count": 2048, 00:34:19.291 "buf_count": 2048 00:34:19.291 } 00:34:19.291 } 00:34:19.291 ] 00:34:19.291 }, 00:34:19.291 { 00:34:19.291 "subsystem": "bdev", 00:34:19.291 "config": [ 00:34:19.291 { 00:34:19.291 "method": "bdev_set_options", 00:34:19.291 "params": { 00:34:19.291 "bdev_io_pool_size": 65535, 00:34:19.291 "bdev_io_cache_size": 256, 00:34:19.291 "bdev_auto_examine": true, 00:34:19.291 "iobuf_small_cache_size": 128, 00:34:19.291 
"iobuf_large_cache_size": 16 00:34:19.291 } 00:34:19.291 }, 00:34:19.291 { 00:34:19.291 "method": "bdev_raid_set_options", 00:34:19.291 "params": { 00:34:19.291 "process_window_size_kb": 1024 00:34:19.291 } 00:34:19.291 }, 00:34:19.291 { 00:34:19.291 "method": "bdev_iscsi_set_options", 00:34:19.291 "params": { 00:34:19.291 "timeout_sec": 30 00:34:19.291 } 00:34:19.291 }, 00:34:19.291 { 00:34:19.291 "method": "bdev_nvme_set_options", 00:34:19.291 "params": { 00:34:19.291 "action_on_timeout": "none", 00:34:19.291 "timeout_us": 0, 00:34:19.291 "timeout_admin_us": 0, 00:34:19.291 "keep_alive_timeout_ms": 10000, 00:34:19.291 "arbitration_burst": 0, 00:34:19.291 "low_priority_weight": 0, 00:34:19.291 "medium_priority_weight": 0, 00:34:19.291 "high_priority_weight": 0, 00:34:19.291 "nvme_adminq_poll_period_us": 10000, 00:34:19.291 "nvme_ioq_poll_period_us": 0, 00:34:19.291 "io_queue_requests": 512, 00:34:19.291 "delay_cmd_submit": true, 00:34:19.291 "transport_retry_count": 4, 00:34:19.291 "bdev_retry_count": 3, 00:34:19.291 "transport_ack_timeout": 0, 00:34:19.291 "ctrlr_loss_timeout_sec": 0, 00:34:19.291 "reconnect_delay_sec": 0, 00:34:19.291 "fast_io_fail_timeout_sec": 0, 00:34:19.291 "disable_auto_failback": false, 00:34:19.291 "generate_uuids": false, 00:34:19.291 "transport_tos": 0, 00:34:19.291 "nvme_error_stat": false, 00:34:19.291 "rdma_srq_size": 0, 00:34:19.291 "io_path_stat": false, 00:34:19.291 "allow_accel_sequence": false, 00:34:19.291 "rdma_max_cq_size": 0, 00:34:19.291 "rdma_cm_event_timeout_ms": 0, 00:34:19.291 "dhchap_digests": [ 00:34:19.291 "sha256", 00:34:19.291 "sha384", 00:34:19.291 "sha512" 00:34:19.291 ], 00:34:19.291 "dhchap_dhgroups": [ 00:34:19.291 "null", 00:34:19.291 "ffdhe2048", 00:34:19.291 "ffdhe3072", 00:34:19.291 "ffdhe4096", 00:34:19.291 "ffdhe6144", 00:34:19.291 "ffdhe8192" 00:34:19.291 ] 00:34:19.291 } 00:34:19.291 }, 00:34:19.291 { 00:34:19.291 "method": "bdev_nvme_attach_controller", 00:34:19.291 "params": { 00:34:19.291 "name": "nvme0", 00:34:19.291 "trtype": "TCP", 00:34:19.291 "adrfam": "IPv4", 00:34:19.291 "traddr": "127.0.0.1", 00:34:19.291 "trsvcid": "4420", 00:34:19.291 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:19.291 "prchk_reftag": false, 00:34:19.291 "prchk_guard": false, 00:34:19.291 "ctrlr_loss_timeout_sec": 0, 00:34:19.291 "reconnect_delay_sec": 0, 00:34:19.291 "fast_io_fail_timeout_sec": 0, 00:34:19.291 "psk": "key0", 00:34:19.291 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:19.291 "hdgst": false, 00:34:19.291 "ddgst": false 00:34:19.291 } 00:34:19.291 }, 00:34:19.291 { 00:34:19.291 "method": "bdev_nvme_set_hotplug", 00:34:19.291 "params": { 00:34:19.291 "period_us": 100000, 00:34:19.291 "enable": false 00:34:19.291 } 00:34:19.291 }, 00:34:19.291 { 00:34:19.291 "method": "bdev_wait_for_examine" 00:34:19.291 } 00:34:19.291 ] 00:34:19.291 }, 00:34:19.291 { 00:34:19.291 "subsystem": "nbd", 00:34:19.291 "config": [] 00:34:19.291 } 00:34:19.291 ] 00:34:19.291 }' 00:34:19.291 19:49:45 keyring_file -- keyring/file.sh@119 -- # waitforlisten 3854141 /var/tmp/bperf.sock 00:34:19.291 19:49:45 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 3854141 ']' 00:34:19.291 19:49:45 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:19.291 19:49:45 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:19.291 19:49:45 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:34:19.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:19.291 19:49:45 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:19.291 19:49:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:19.551 [2024-05-15 19:49:45.475761] Starting SPDK v24.05-pre git sha1 7f5235167 / DPDK 23.11.0 initialization... 00:34:19.551 [2024-05-15 19:49:45.475819] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3854141 ] 00:34:19.551 EAL: No free 2048 kB hugepages reported on node 1 00:34:19.551 [2024-05-15 19:49:45.539869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:19.551 [2024-05-15 19:49:45.603947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:19.811 [2024-05-15 19:49:45.742576] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:20.381 19:49:46 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:20.381 19:49:46 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:34:20.381 19:49:46 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:34:20.381 19:49:46 keyring_file -- keyring/file.sh@120 -- # jq length 00:34:20.381 19:49:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:20.381 19:49:46 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:34:20.381 19:49:46 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:34:20.381 19:49:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:20.381 19:49:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:20.381 19:49:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:20.381 19:49:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:20.381 19:49:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:20.640 19:49:46 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:34:20.640 19:49:46 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:34:20.640 19:49:46 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:20.640 19:49:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:20.640 19:49:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:20.640 19:49:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:20.640 19:49:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:20.899 19:49:46 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:34:20.899 19:49:46 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:34:20.899 19:49:46 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:34:20.900 19:49:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:34:21.159 19:49:47 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:34:21.159 19:49:47 keyring_file -- keyring/file.sh@1 -- # cleanup 00:34:21.159 19:49:47 keyring_file -- keyring/file.sh@19 
-- # rm -f /tmp/tmp.YBDJw5O6v4 /tmp/tmp.WjZtWghQaH 00:34:21.159 19:49:47 keyring_file -- keyring/file.sh@20 -- # killprocess 3854141 00:34:21.159 19:49:47 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 3854141 ']' 00:34:21.159 19:49:47 keyring_file -- common/autotest_common.sh@950 -- # kill -0 3854141 00:34:21.159 19:49:47 keyring_file -- common/autotest_common.sh@951 -- # uname 00:34:21.159 19:49:47 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:21.159 19:49:47 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3854141 00:34:21.159 19:49:47 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:34:21.159 19:49:47 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:34:21.159 19:49:47 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3854141' 00:34:21.159 killing process with pid 3854141 00:34:21.159 19:49:47 keyring_file -- common/autotest_common.sh@965 -- # kill 3854141 00:34:21.159 Received shutdown signal, test time was about 1.000000 seconds 00:34:21.159 00:34:21.159 Latency(us) 00:34:21.159 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:21.159 =================================================================================================================== 00:34:21.159 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:34:21.159 19:49:47 keyring_file -- common/autotest_common.sh@970 -- # wait 3854141 00:34:21.418 19:49:47 keyring_file -- keyring/file.sh@21 -- # killprocess 3852152 00:34:21.418 19:49:47 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 3852152 ']' 00:34:21.418 19:49:47 keyring_file -- common/autotest_common.sh@950 -- # kill -0 3852152 00:34:21.418 19:49:47 keyring_file -- common/autotest_common.sh@951 -- # uname 00:34:21.418 19:49:47 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:21.418 19:49:47 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3852152 00:34:21.418 19:49:47 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:34:21.418 19:49:47 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:34:21.418 19:49:47 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3852152' 00:34:21.418 killing process with pid 3852152 00:34:21.418 19:49:47 keyring_file -- common/autotest_common.sh@965 -- # kill 3852152 00:34:21.419 [2024-05-15 19:49:47.438789] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:34:21.419 [2024-05-15 19:49:47.438826] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:34:21.419 19:49:47 keyring_file -- common/autotest_common.sh@970 -- # wait 3852152 00:34:21.678 00:34:21.678 real 0m12.656s 00:34:21.678 user 0m30.708s 00:34:21.678 sys 0m2.884s 00:34:21.678 19:49:47 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:21.678 19:49:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:21.678 ************************************ 00:34:21.678 END TEST keyring_file 00:34:21.678 ************************************ 00:34:21.678 19:49:47 -- spdk/autotest.sh@292 -- # [[ n == y ]] 00:34:21.678 19:49:47 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']' 00:34:21.678 19:49:47 -- 
spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:34:21.678 19:49:47 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:34:21.678 19:49:47 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']' 00:34:21.678 19:49:47 -- spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']' 00:34:21.678 19:49:47 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']' 00:34:21.678 19:49:47 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:34:21.678 19:49:47 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:34:21.678 19:49:47 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:34:21.678 19:49:47 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:34:21.678 19:49:47 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:34:21.678 19:49:47 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]] 00:34:21.678 19:49:47 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:34:21.678 19:49:47 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:34:21.678 19:49:47 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:34:21.678 19:49:47 -- spdk/autotest.sh@376 -- # trap - SIGINT SIGTERM EXIT 00:34:21.678 19:49:47 -- spdk/autotest.sh@378 -- # timing_enter post_cleanup 00:34:21.678 19:49:47 -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:21.679 19:49:47 -- common/autotest_common.sh@10 -- # set +x 00:34:21.679 19:49:47 -- spdk/autotest.sh@379 -- # autotest_cleanup 00:34:21.679 19:49:47 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:34:21.679 19:49:47 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:34:21.679 19:49:47 -- common/autotest_common.sh@10 -- # set +x 00:34:29.813 INFO: APP EXITING 00:34:29.813 INFO: killing all VMs 00:34:29.813 INFO: killing vhost app 00:34:29.813 INFO: EXIT DONE 00:34:33.109 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:34:33.109 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:34:33.109 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:34:33.109 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:34:33.109 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:34:33.109 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:34:33.109 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:34:33.109 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:34:33.109 0000:65:00.0 (144d a80a): Already using the nvme driver 00:34:33.109 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:34:33.109 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:34:33.109 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:34:33.109 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:34:33.370 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:34:33.370 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:34:33.370 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:34:33.370 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:34:37.574 Cleaning 00:34:37.574 Removing: /var/run/dpdk/spdk0/config 00:34:37.574 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:37.574 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:37.574 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:37.574 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:37.574 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:34:37.574 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:34:37.574 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:34:37.574 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:34:37.574 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:37.574 Removing: 
/var/run/dpdk/spdk0/hugepage_info 00:34:37.574 Removing: /var/run/dpdk/spdk1/config 00:34:37.574 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:34:37.574 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:34:37.574 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:34:37.574 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:34:37.574 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:34:37.574 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:34:37.574 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:34:37.574 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:34:37.574 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:34:37.574 Removing: /var/run/dpdk/spdk1/hugepage_info 00:34:37.574 Removing: /var/run/dpdk/spdk1/mp_socket 00:34:37.574 Removing: /var/run/dpdk/spdk2/config 00:34:37.574 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:34:37.574 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:34:37.574 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:34:37.574 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:34:37.574 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:34:37.574 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:34:37.574 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:34:37.574 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:34:37.574 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:34:37.574 Removing: /var/run/dpdk/spdk2/hugepage_info 00:34:37.574 Removing: /var/run/dpdk/spdk3/config 00:34:37.574 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:34:37.574 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:34:37.574 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:34:37.574 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:34:37.574 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:34:37.574 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:34:37.574 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:34:37.574 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:34:37.574 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:34:37.574 Removing: /var/run/dpdk/spdk3/hugepage_info 00:34:37.574 Removing: /var/run/dpdk/spdk4/config 00:34:37.574 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:34:37.574 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:34:37.574 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:34:37.574 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:34:37.574 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:34:37.574 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:34:37.574 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:34:37.574 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:34:37.574 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:34:37.574 Removing: /var/run/dpdk/spdk4/hugepage_info 00:34:37.574 Removing: /dev/shm/bdev_svc_trace.1 00:34:37.574 Removing: /dev/shm/nvmf_trace.0 00:34:37.574 Removing: /dev/shm/spdk_tgt_trace.pid3362929 00:34:37.574 Removing: /var/run/dpdk/spdk0 00:34:37.574 Removing: /var/run/dpdk/spdk1 00:34:37.574 Removing: /var/run/dpdk/spdk2 00:34:37.574 Removing: /var/run/dpdk/spdk3 00:34:37.574 Removing: /var/run/dpdk/spdk4 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3361287 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3362929 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3363668 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3364754 00:34:37.574 Removing: 
/var/run/dpdk/spdk_pid3365296 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3366705 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3366955 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3367245 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3368411 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3368986 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3369371 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3369761 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3370168 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3370557 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3370892 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3371044 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3371328 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3372596 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3376061 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3376413 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3376706 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3377034 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3377413 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3377693 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3378119 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3378298 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3378579 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3378839 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3379077 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3379217 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3379717 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3380008 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3380401 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3380769 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3380794 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3380984 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3381215 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3381565 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3381914 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3382263 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3382542 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3382733 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3383007 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3383359 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3383706 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3384061 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3384272 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3384479 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3384802 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3385154 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3385510 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3385841 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3386046 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3386272 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3386604 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3386960 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3387051 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3387448 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3392570 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3451226 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3456948 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3470192 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3477281 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3482658 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3483341 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3498717 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3498719 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3499725 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3500727 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3501735 00:34:37.574 Removing: /var/run/dpdk/spdk_pid3502412 00:34:37.575 Removing: 
/var/run/dpdk/spdk_pid3502442 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3502749 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3503015 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3503081 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3504093 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3505093 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3506106 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3506776 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3506784 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3507119 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3508554 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3509950 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3521009 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3521503 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3527152 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3534695 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3537790 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3551229 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3563020 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3565025 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3566089 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3588939 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3594005 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3627676 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3633751 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3635750 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3637770 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3637794 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3638124 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3638459 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3638849 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3640968 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3641927 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3642305 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3645015 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3645719 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3646431 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3651852 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3665106 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3670502 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3678397 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3679903 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3681465 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3687193 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3692792 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3702879 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3702966 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3708609 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3708761 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3709088 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3709652 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3709738 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3715471 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3716172 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3721833 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3725749 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3732812 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3739877 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3750944 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3760064 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3760066 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3784975 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3785654 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3786281 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3786857 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3787744 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3788420 00:34:37.575 Removing: 
/var/run/dpdk/spdk_pid3789094 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3789720 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3795178 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3795513 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3803197 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3803363 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3806085 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3813612 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3813709 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3820387 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3822755 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3824980 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3826571 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3829237 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3830753 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3841620 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3842285 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3842957 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3846023 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3846546 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3847043 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3852152 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3852198 00:34:37.575 Removing: /var/run/dpdk/spdk_pid3854141 00:34:37.575 Clean 00:34:37.575 19:50:03 -- common/autotest_common.sh@1447 -- # return 0 00:34:37.575 19:50:03 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup 00:34:37.575 19:50:03 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:37.575 19:50:03 -- common/autotest_common.sh@10 -- # set +x 00:34:37.835 19:50:03 -- spdk/autotest.sh@382 -- # timing_exit autotest 00:34:37.835 19:50:03 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:37.835 19:50:03 -- common/autotest_common.sh@10 -- # set +x 00:34:37.835 19:50:03 -- spdk/autotest.sh@383 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:34:37.835 19:50:03 -- spdk/autotest.sh@385 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:34:37.835 19:50:03 -- spdk/autotest.sh@385 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:34:37.835 19:50:03 -- spdk/autotest.sh@387 -- # hash lcov 00:34:37.835 19:50:03 -- spdk/autotest.sh@387 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:34:37.835 19:50:03 -- spdk/autotest.sh@389 -- # hostname 00:34:37.835 19:50:03 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:34:37.835 geninfo: WARNING: invalid characters removed from testname! 
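The coverage wrap-up visible around this point follows the usual lcov flow: capture a test tracefile against the SPDK source tree, merge it with the pre-test baseline, then strip bundled DPDK and system paths from the result. The sketch below only condenses the lcov invocations already shown in this log; SRC and OUT are stand-in paths, the test name comes from hostname as in the log, and some of the genhtml-related --rc switches are omitted, so treat it as an illustration rather than the exact autotest command line.

    # rc switches carried over from the log (branch + function coverage)
    rc=(--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1)

    # capture coverage gathered while the tests ran
    lcov "${rc[@]}" --no-external -q -c -d "$SRC" -t "$(hostname)" -o "$OUT/cov_test.info"

    # merge with the baseline captured before the run
    lcov "${rc[@]}" -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

    # drop bundled DPDK and system headers from the report
    lcov "${rc[@]}" -q -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
    lcov "${rc[@]}" -q -r "$OUT/cov_total.info" '/usr/*' -o "$OUT/cov_total.info"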
00:35:04.414 19:50:27 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:05.357 19:50:31 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:07.903 19:50:33 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:09.816 19:50:35 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:12.408 19:50:38 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:14.323 19:50:40 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:16.868 19:50:42 -- spdk/autotest.sh@396 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:35:16.868 19:50:42 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:16.868 19:50:42 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:35:16.868 19:50:42 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:16.868 19:50:42 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:16.868 19:50:42 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.868 19:50:42 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.868 19:50:42 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.868 19:50:42 -- paths/export.sh@5 -- $ export PATH 00:35:16.868 19:50:42 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.868 19:50:42 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:35:16.868 19:50:42 -- common/autobuild_common.sh@437 -- $ date +%s 00:35:16.868 19:50:42 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715795442.XXXXXX 00:35:16.868 19:50:42 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715795442.tdmeJF 00:35:16.868 19:50:42 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:35:16.868 19:50:42 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:35:16.868 19:50:42 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:35:16.868 19:50:42 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:35:16.868 19:50:42 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:35:16.868 19:50:42 -- common/autobuild_common.sh@453 -- $ get_config_params 00:35:16.868 19:50:42 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:35:16.868 19:50:42 -- common/autotest_common.sh@10 -- $ set +x 00:35:16.868 19:50:42 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:35:16.868 19:50:42 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:35:16.868 19:50:42 -- pm/common@17 -- $ local monitor 00:35:16.868 19:50:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:16.868 19:50:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:16.868 19:50:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:16.868 19:50:42 -- pm/common@21 -- $ date +%s 00:35:16.868 19:50:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:16.868 19:50:42 -- pm/common@25 -- $ sleep 1 00:35:16.868 
19:50:42 -- pm/common@21 -- $ date +%s 00:35:16.868 19:50:42 -- pm/common@21 -- $ date +%s 00:35:16.868 19:50:42 -- pm/common@21 -- $ date +%s 00:35:16.868 19:50:42 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715795442 00:35:16.868 19:50:42 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715795442 00:35:16.869 19:50:42 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715795442 00:35:16.869 19:50:42 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715795442 00:35:16.869 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715795442_collect-vmstat.pm.log 00:35:16.869 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715795442_collect-cpu-load.pm.log 00:35:16.869 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715795442_collect-cpu-temp.pm.log 00:35:16.869 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715795442_collect-bmc-pm.bmc.pm.log 00:35:17.812 19:50:43 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:35:17.812 19:50:43 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:35:17.812 19:50:43 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:35:17.812 19:50:43 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:35:17.812 19:50:43 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:35:17.812 19:50:43 -- spdk/autopackage.sh@19 -- $ timing_finish 00:35:17.812 19:50:43 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:35:17.812 19:50:43 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:35:17.812 19:50:43 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:35:17.812 19:50:43 -- spdk/autopackage.sh@20 -- $ exit 0 00:35:17.812 19:50:43 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:35:17.812 19:50:43 -- pm/common@29 -- $ signal_monitor_resources TERM 00:35:17.812 19:50:43 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:35:17.812 19:50:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:17.812 19:50:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:35:17.812 19:50:43 -- pm/common@44 -- $ pid=3866835 00:35:17.812 19:50:43 -- pm/common@50 -- $ kill -TERM 3866835 00:35:17.812 19:50:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:17.812 19:50:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:35:17.812 19:50:43 -- pm/common@44 -- $ pid=3866836 00:35:17.812 19:50:43 -- pm/common@50 -- $ 
kill -TERM 3866836 00:35:17.812 19:50:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:17.812 19:50:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:35:17.812 19:50:43 -- pm/common@44 -- $ pid=3866838 00:35:17.812 19:50:43 -- pm/common@50 -- $ kill -TERM 3866838 00:35:17.812 19:50:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:17.812 19:50:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:35:17.812 19:50:43 -- pm/common@44 -- $ pid=3866865 00:35:17.812 19:50:43 -- pm/common@50 -- $ sudo -E kill -TERM 3866865 00:35:17.812 + [[ -n 3236947 ]] 00:35:17.812 + sudo kill 3236947 00:35:17.822 [Pipeline] } 00:35:17.839 [Pipeline] // stage 00:35:17.843 [Pipeline] } 00:35:17.862 [Pipeline] // timeout 00:35:17.867 [Pipeline] } 00:35:17.885 [Pipeline] // catchError 00:35:17.890 [Pipeline] } 00:35:17.909 [Pipeline] // wrap 00:35:17.916 [Pipeline] } 00:35:17.931 [Pipeline] // catchError 00:35:17.941 [Pipeline] stage 00:35:17.943 [Pipeline] { (Epilogue) 00:35:17.958 [Pipeline] catchError 00:35:17.960 [Pipeline] { 00:35:17.976 [Pipeline] echo 00:35:17.978 Cleanup processes 00:35:17.984 [Pipeline] sh 00:35:18.274 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:35:18.274 3866946 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:35:18.274 3867384 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:35:18.289 [Pipeline] sh 00:35:18.578 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:35:18.578 ++ grep -v 'sudo pgrep' 00:35:18.578 ++ awk '{print $1}' 00:35:18.578 + sudo kill -9 3866946 00:35:18.592 [Pipeline] sh 00:35:18.880 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:35:33.804 [Pipeline] sh 00:35:34.093 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:35:34.093 Artifacts sizes are good 00:35:34.108 [Pipeline] archiveArtifacts 00:35:34.115 Archiving artifacts 00:35:34.309 [Pipeline] sh 00:35:34.626 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:35:34.641 [Pipeline] cleanWs 00:35:34.650 [WS-CLEANUP] Deleting project workspace... 00:35:34.650 [WS-CLEANUP] Deferred wipeout is used... 00:35:34.658 [WS-CLEANUP] done 00:35:34.660 [Pipeline] } 00:35:34.679 [Pipeline] // catchError 00:35:34.691 [Pipeline] sh 00:35:34.976 + logger -p user.info -t JENKINS-CI 00:35:34.986 [Pipeline] } 00:35:35.001 [Pipeline] // stage 00:35:35.007 [Pipeline] } 00:35:35.024 [Pipeline] // node 00:35:35.029 [Pipeline] End of Pipeline 00:35:35.056 Finished: SUCCESS
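For reference, the keyring_file test that produced the RPC traffic earlier in this log boils down to a short sequence of JSON-RPC calls against the bdevperf socket: register a PSK file with the keyring, attach an NVMe-oF TCP controller that references it by name, inspect the key's reference count, then remove it. The condensed sketch below only restates those calls with the socket, NQNs and addresses taken from the log; /tmp/psk.key is a stand-in path whose contents are assumed to already be a valid NVMe TLS interchange key (NVMeTLSkey-1:...), and it must be mode 0600, since the test above shows the keyring rejecting a 0660 file.

    rpc=./scripts/rpc.py          # spdk/scripts/rpc.py in the checkout
    sock=/var/tmp/bperf.sock      # bdevperf RPC socket used throughout the test

    chmod 0600 /tmp/psk.key
    $rpc -s $sock keyring_file_add_key key0 /tmp/psk.key
    $rpc -s $sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
         -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

    # refcnt is 2 while the attached controller holds the key
    $rpc -s $sock keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'

    # removal only marks the key as removed; it is released once the controller detaches
    $rpc -s $sock keyring_file_remove_key key0
    $rpc -s $sock bdev_nvme_detach_controller nvme0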